
Aim of the tutorial

Emerging non-volatile main memory (NVRAM) technologies, such as Intel’s Optane DC Persistent Memory, are becoming readily available on the market and provide desirable properties such as persistence, byte-addressability, low idle power, and improved memory density. Due to these advantages, NVRAMs are likely to be an important component of memory hierarchies in the near future. However, NVRAMs bring two major challenges to computer systems: persistent main memory and read-write asymmetry. First, although persistence provides the opportunity for programs to recover after a system crash, the volatility of caches and registers requires careful reasoning about when data actually reaches durable memory. Second, due to fundamental properties of the underlying hardware, reads are much cheaper than writes in terms of both latency and throughput. These properties require researchers to rethink many aspects of how computer systems are designed, including algorithms and software.

In this tutorial, we will provide a survey of research problems posed by NVRAMs, covering both memory-level persistence and read-write asymmetry. We assume no prior knowledge of NVRAM, beginning with an overview of how these new memories work and where they fit into traditional computer systems.

Outline:

The first half of this tutorial will focus on using NVRAM for persistence. We use database indexes as an example of how algorithms designed for NVRAM can be orders of magnitude faster than algorithms designed for traditional persistent storage. Since CPU caches and registers have remained volatile, the challenge of designing persistent algorithms is in ensuring that the state of NVRAM is consistent after a crash and that the execution can be recovered. We begin by presenting general techniques for making sequential and parallel programs persistent. Next, we move on to concurrent programs, which introduce a new set of challenges. The first challenge is defining what it means for a persistent concurrent program to be correct. We survey various machine models and correctness conditions and discuss the pros and cons of each. Next, we survey the kinds of persistence problems that have been studied and present some hand-tuned and general solutions to these problems. We conclude this section with an experimental comparison of these solutions and discuss the tradeoffs between ease of use and performance.

The second half of this tutorial will focus on write-efficiency. First, we will introduce computational models for a variety of system and algorithmic considerations that account for read-write asymmetry. Using these models, we will present both general techniques and problem-based examples illustrating how to reduce writes in algorithms. We will present experimental results for these techniques and algorithms, showing that relying on the theory actually leads to practical improvements. In particular, we will present a model for graph analytics, the Parallel Semi-Asymmetric Model (PSAM), which assumes that the vertices of the graph fit in a fast read-write memory (DRAM), but the edges do not. We will discuss write-efficient parallel algorithms for a set of 18 fundamental graph problems. Experimentally, we run all of the algorithms on the largest publicly-available graph, the Hyperlink2012 web graph with over 200 billion edges, and show that our PSAM algorithms outperform the fastest prior algorithms designed for DRAM or NVRAM. Our tutorial will conclude by examining how the caching hierarchy that underlies all of the previously discussed models is affected by read-write asymmetry. We will discuss the Asymmetric Ideal-Cache model and how it is used to extend the traditional caching assumptions. Furthermore, we will provide an overview of the research that has been done on writeback-aware caching, both theoretical and practical.

Prerequisite Knowledge

Basic knowledge of memory hierarchy and caching. Familiarity with concurrent algorithms and correctness conditions.

Tentative Schedule

https://www.cs.ucr.edu/~ygu/others/ppopp2020-Tutorial.pdf

Sat 22 Feb

PPoPP 2020 Workshops and Tutorials

08:00 - 12:00: Tutorial: Abstractions and Algorithms for Efficiently Programming NVRAMs (Sorrento)
13:00 - 17:00: Tutorial: Abstractions and Algorithms for Efficiently Programming NVRAMs (Sorrento)

Presenters: Naama Ben-David (Carnegie Mellon University), Guy Blelloch (Carnegie Mellon University, USA), Laxman Dhulipala (Carnegie Mellon University), Michal Friedman (Technion - Israel), Yuanhao Wei (Carnegie Mellon University), Yan Gu, Charles McGuffey