waveSZ: A Hardware-Algorithm Co-Design of Efficient Lossy Compression for Scientific Data
Error-bounded lossy compression is critical to the success of extreme-scale scientific research because of the ever-increasing volumes of data produced by today's high-performance computing (HPC) applications. Not only can error-controlled lossy compressors significantly reduce the I/O and storage burden, but they can also retain high data fidelity for post-analysis. Existing state-of-the-art lossy compressors, however, generally suffer from relatively low compression and decompression throughput (up to hundreds of megabytes per second on a single CPU core), which considerably restricts the adoption of lossy compression by many HPC applications, especially those with a fairly high data production rate. In this paper, we propose a highly efficient lossy compression approach based on field-programmable gate arrays (FPGAs) under the state-of-the-art lossy compression model SZ. Our contributions are fourfold. (1) We adopt a wavefront memory layout to alleviate the data dependency during prediction for higher-dimensional predictors, such as the Lorenzo predictor. (2) We propose a co-design framework named waveSZ based on the wavefront memory layout and the characteristics of the SZ algorithm, and carefully implement it using high-level synthesis. (3) We propose a hardware-algorithm co-optimization method to further improve performance. (4) We evaluate waveSZ on three real-world HPC simulation datasets from the Scientific Data Reduction Benchmarks and compare it with other state-of-the-art methods on both CPUs and FPGAs. Experiments show that waveSZ can improve SZ's compression throughput by 6.9x to 8.7x over the production version running on a state-of-the-art CPU, and improve the compression ratio and throughput by 2.1x and 8.3x on average, respectively, compared with the state-of-the-art FPGA design.
Mon 24 Feb (time zone: Tijuana, Baja California)
10:55 - 12:35 | Machine Learning/Big Data (Mediterranean Ballroom), Main Conference. Session chair: Shuaiwen Leon Song (University of Sydney)
10:55 (25m, Talk): Optimizing Batched Winograd Convolution on GPUs. Da Yan (Hong Kong University of Science and Technology), Wei Wang (Hong Kong University of Science and Technology), Xiaowen Chu (Hong Kong Baptist University)
11:20 (25m, Talk): Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations. Shigang Li (ETH Zurich), Tal Ben-Nun (Department of Computer Science, ETH Zurich), Salvatore Di Girolamo (Department of Computer Science, ETH Zurich), Dan Alistarh (IST Austria), Torsten Hoefler (Department of Computer Science, ETH Zurich)
11:45 (25m, Talk): Scalable Top-K Retrieval with Sparta. Gali Sheffi (Technion - Israel), Dmitry Basin (Yahoo Research), Edward Bortnikov (Yahoo Research), David Carmel (Amazon), Idit Keidar (Technion - Israel Institute of Technology)
12:10 (25m, Talk): waveSZ: A Hardware-Algorithm Co-Design of Efficient Lossy Compression for Scientific Data. Jiannan Tian (University of Alabama), Sheng Di (Argonne National Laboratory), Chengming Zhang (University of Alabama), Xin Liang, Sian Jin (University of Alabama), Dazhao Cheng (University of North Carolina at Charlotte), Dingwen Tao (University of Alabama), Franck Cappello (Argonne National Laboratory)