1
ThyNVM Software-Transparent Crash Consistency for Persistent Memory
Onur Mutlu (joint work with Jinglei Ren, Jishen Zhao, Samira Khan, Jongmoo Choi, Yongwei Wu) August 8, 2016 Flash Memory Summit 2016, Santa Clara, CA
2
Original Paper (I)
3
Original Paper (II) Presented at the ACM/IEEE MICRO Conference in Dec 2015. Full paper for details: Jinglei Ren, Jishen Zhao, Samira Khan, Jongmoo Choi, Yongwei Wu, and Onur Mutlu, "ThyNVM: Enabling Software-Transparent Crash Consistency in Persistent Memory Systems," Proceedings of the 48th International Symposium on Microarchitecture (MICRO), Waikiki, Hawaii, USA, December 2015. [Slides (pptx) (pdf)] [Lightning Session Slides (pptx) (pdf)] [Poster (pptx) (pdf)] [Source Code]
4
The Main Memory System Main memory is a critical component of all computing systems: server, mobile, embedded, desktop, sensor. The main memory system must scale (in size, technology, efficiency, cost, and management algorithms) to maintain performance growth and technology scaling benefits.
[Figure: Processor and caches, Main Memory, Storage (SSD/HDD)]
5
Limits of Charge Memory
Difficult charge placement and control. Flash: floating gate charge. DRAM: capacitor charge, transistor leakage. Reliable sensing becomes difficult as the charge storage unit size is reduced.
6
Emerging NVM Technologies
Some emerging resistive memory technologies seem more scalable than DRAM (and they are non-volatile). Example: Phase Change Memory. Data stored by changing the phase of the material; data read by detecting the material’s resistance. Expected to scale to 9nm (2022 [ITRS]); prototyped at 20nm (Raoux+, IBM JRD 2008). Expected to be denser than DRAM: can store multiple bits/cell. But, emerging technologies have (many) shortcomings. Can they be enabled to replace/augment/surpass DRAM? Even if DRAM continues to scale, there could be benefits to examining and enabling these new technologies.
7
Promising NVM Technologies
PCM: inject current to change the material phase; resistance determined by phase. STT-MRAM: inject current to change magnet polarity; resistance determined by polarity. Memristors/RRAM/ReRAM: inject current to change atomic structure; resistance determined by atom distance.
8
NVM as Main Memory Replacement
Very promising: persistence, high capacity, OK latency, low idle power. Can enable merging of memory and storage. Two example works that show the benefits: Lee, Ipek, Mutlu, Burger, “Architecting Phase Change Memory as a Scalable DRAM Alternative,” ISCA 2009. Kultursay, Kandemir, Sivasubramaniam, Mutlu, “Evaluating STT-RAM as an Energy-Efficient Main Memory Alternative,” ISPASS 2013.
9
Two Example Works
10
Architected STT-MRAM as Main Memory
4-core, 4GB main memory, multiprogrammed workloads. ~6% performance loss, ~60% energy savings vs. DRAM. Kultursay+, “Evaluating STT-RAM as an Energy-Efficient Main Memory Alternative,” ISPASS 2013.
11
A More Viable Approach: Hybrid Memory Systems
[Figure: CPU with a DRAM controller and an NVM controller; DRAM (fast, durable, but small, leaky, volatile, high-cost) alongside NVM / Technology X (large, non-volatile, low-cost, but slow, wears out, high active energy)]
Hardware/software manage data allocation and movement to achieve the best of multiple technologies.
Meza+, “Enabling Efficient and Scalable Hybrid Memories,” IEEE Comp. Arch. Letters, 2012. Yoon+, “Row Buffer Locality Aware Caching Policies for Hybrid Memories,” ICCD 2012 Best Paper Award.
12
Some Opportunities with Emerging Technologies
Merging of memory and storage (e.g., a single interface to manage all data). New applications (e.g., ultra-fast checkpoint and restore). More robust system design (e.g., reducing data loss). Processing tightly-coupled with memory (e.g., enabling efficient search and filtering). Meza+, “A Case for Efficient Hardware-Software Cooperative Management of Storage and Memory,” WEED 2013.
13
TWO-LEVEL STORAGE MODEL
[Figure: CPU issues ld/st to volatile memory (DRAM: fast, byte-addressable) and file I/O to storage (non-volatile, slow, block-addressable)]
In today’s systems we have a two-level storage model: memory is directly accessed by load/store instructions, while storage is accessed through system calls. The disk stores persistent data, but it is slow and can only be accessed in blocks, so we transfer data to DRAM, which is fast and byte-addressable.
14
TWO-LEVEL STORAGE MODEL
[Figure: the same two-level model, now with NVM (PCM, STT-RAM), which is fast and byte-addressable yet non-volatile]
However, we now have emerging non-volatile memories that combine the characteristics of both memory and storage: they are fast and byte-addressable like DRAM, but can also persist data like storage.
Non-volatile memories combine characteristics of memory and storage
15
Coordinated Memory and Storage with NVM (I)
The traditional two-level storage model is a bottleneck with NVM. Volatile data in memory is accessed through a load/store interface; persistent data in storage is accessed through a file system interface. Problem: Operating system (OS) and file system (FS) code to locate, translate, and buffer data become performance and energy bottlenecks with fast NVM stores.
[Figure: two-level store; the processor and caches reach main memory via load/store and virtual-memory address translation, and reach storage (SSD/HDD) via fopen, fread, fwrite through the operating system and file system; persistent (e.g., phase-change) memory can serve both roles]
16
Coordinated Memory and Storage with NVM (II)
Goal: Unify memory and storage management in a single unit to eliminate the wasted work to locate, transfer, and translate data. This improves both energy and performance, and simplifies the programming model as well.
[Figure: unified memory/storage; the processor and caches issue loads/stores to a Persistent Memory Manager, which manages persistent (e.g., phase-change) memory and provides feedback]
Meza+, “A Case for Efficient Hardware-Software Cooperative Management of Storage and Memory,” WEED 2013.
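To make the contrast concrete, here is a minimal C sketch of the same update done through the traditional file-system interface and through a direct load/store interface to persistent memory. The pmem_map() call is a hypothetical placeholder, not an API from these papers; a real system would go through something like the Persistent Memory Manager.

    #include <stdio.h>
    #include <stdint.h>

    /* Two-level store: persistent data lives in a file and is reached
     * through OS/FS code (open, locate, buffer, block I/O). */
    void update_balance_file(const char *path, uint64_t new_balance) {
        FILE *f = fopen(path, "r+b");                    /* system call, FS lookup */
        if (!f) return;
        fwrite(&new_balance, sizeof new_balance, 1, f);  /* buffered block I/O */
        fclose(f);
    }

    /* Single-level store: persistent data is mapped into the address space
     * and updated with ordinary stores; no FS code on the access path.
     * pmem_map() is a hypothetical mapping primitive for illustration. */
    extern void *pmem_map(const char *name);

    void update_balance_pmem(uint64_t new_balance) {
        uint64_t *balance = pmem_map("account/balance");  /* hypothetical */
        *balance = new_balance;      /* a plain store reaches persistent memory */
    }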
17
Provides an opportunity to manipulate persistent data directly
[Figure: CPU issues ld/st directly to persistent memory (NVM)]
NVM provides a unique opportunity to manipulate persistent data directly in main memory.
Provides an opportunity to manipulate persistent data directly
18
Performance Benefits of Persistent Memory
[Chart: ~24X and ~5X performance improvements] Meza+, “A Case for Efficient Hardware-Software Cooperative Management of Storage and Memory,” WEED 2013.
19
Energy Benefits of Persistent Memory
[Chart: ~16X and ~5X energy improvements] Meza+, “A Case for Efficient Hardware-Software Cooperative Management of Storage and Memory,” WEED 2013.
20
On Persistent Memory Benefits & Challenges
Justin Meza, Yixin Luo, Samira Khan, Jishen Zhao, Yuan Xie, and Onur Mutlu, "A Case for Efficient Hardware-Software Cooperative Management of Storage and Memory," Proceedings of the 5th Workshop on Energy-Efficient Design (WEED), Tel-Aviv, Israel, June 2013. [Slides (pptx)] [Slides (pdf)]
21
The Persistent Memory Manager (PMM)
Exposes a load/store interface to access persistent data: applications can directly access persistent memory, with no conversion, translation, or location overhead for persistent data. Manages data placement, location, persistence, and security, to get the best of multiple forms of storage. Manages metadata storage and retrieval; this can lead to overheads that need to be managed. Exposes hooks and interfaces for system software, to enable better data placement and management decisions. Meza+, “A Case for Efficient Hardware-Software Cooperative Management of Storage and Memory,” WEED 2013.
Let’s take a closer look at the high-level design goals for a Persistent Memory Manager, or PMM. The most noticeable feature of the PMM, from the system software perspective, is that it exposes a load/store interface to access all persistent data in the system. This means that applications can allocate portions of persistent memory, just as they would allocate portions of volatile DRAM memory in today’s systems. Of course, with a diverse array of devices managed by persistent memory, it is important for the PMM to leverage each of their strengths, and mitigate each of their weaknesses, as much as possible. To this end, the PMM is responsible for managing how volatile and persistent data are placed among devices, to ensure that data can be efficiently located, persistence guarantees are met, and any security rules are enforced. The PMM is also responsible for managing metadata storage and retrieval; we will later evaluate how such overheads affect system performance. In addition, to achieve the best performance and energy efficiency for the applications running on a system, the system software needs to provide hints that the hardware can use to make more efficient data placement decisions.
22
The Persistent Memory Manager (PMM)
Persistent objects
And to tie everything together, here is how a persistent memory would operate. Code allocates persistent objects and accesses them using loads and stores, no different from how memory is accessed in today’s machines. Optionally, the software can provide hints about the access patterns or requirements of the software. Then, a Persistent Memory Manager uses this access information and the software hints to allocate, locate, migrate, and access data among the heterogeneous array of devices present in the system, attempting to strike a balance between application-level guarantees, like persistence and security, and performance and energy efficiency.
PMM uses access and hint information to allocate, locate, migrate and access data in the heterogeneous array of devices
23
One Key Challenge: How to ensure consistency of the system/data if all memory is persistent? Two extremes: programmer-transparent (let the system handle it) and programmer-only (let the programmer handle it). Many alternatives in-between…
24
CHALLENGE: CRASH CONSISTENCY
However, ensuring data consistency is a challenge in persistent memory systems: a power failure may cause permanent data corruption in NVM, leading to an inconsistent memory state.
Persistent Memory System
System crash can result in permanent data corruption in NVM
25
CURRENT SOLUTIONS Burden on the programmers
Explicit interfaces to manage consistency: NV-Heaps [ASPLOS’11], BPFS [SOSP’09], Mnemosyne [ASPLOS’11]. AtomicBegin { Insert a new node; } AtomicEnd; Limits adoption of NVM: have to rewrite code with a clear partition between volatile and non-volatile data.
Prior works proposed transactional interfaces to protect data from inconsistency. But this limits the adoption of NVM, as programmers need to rewrite their code with a clear partition between volatile and non-volatile data, which is a burden on general programmers.
Burden on the programmers
26
Software-transparent crash consistency in persistent memory systems
OUR APPROACH: ThyNVM Goal: Software-transparent crash consistency in persistent memory systems
27
ThyNVM: Summary A new hardware-based checkpointing mechanism
Checkpoints at multiple granularities to reduce both checkpointing latency and metadata overhead. Overlaps checkpointing and execution to reduce checkpointing latency. Adapts to DRAM and NVM characteristics. Performs within 4.9% of an idealized DRAM with zero-cost consistency.
28
OUTLINE Crash Consistency Problem Current Solutions ThyNVM Evaluation
Here is the outline of the rest of the talk. Next, I will talk about the crash consistency problem in detail and show the challenges of current solutions. Then I will present our design, ThyNVM, show the results, and finally conclude. Conclusion
29
CRASH CONSISTENCY PROBLEM
Add a node to a linked list: 1. Link to next; 2. Link to prev.
Let’s see an example of the crash consistency problem in persistent memory systems. When we add a node to the linked list, there are two operations: first, we link the new node to the next node, and then we link the previous node to the new node. Unfortunately, due to write reordering, we cannot guarantee which write will reach memory first. If there is a power failure, then depending on which write persisted, we might have a broken linked list. As a result, after a crash we can have an inconsistent memory state.
System crash can result in inconsistent memory state
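As a concrete illustration, here is a minimal C sketch of that insertion; the node type is invented for this example, not taken from the slides. The two stores may become durable in either order, and a crash in between can leave the list broken.

    typedef struct node {
        int value;
        struct node *next;
    } node_t;

    /* Insert new_node right after prev. Two stores must reach persistent
     * memory; caches and the memory controller may reorder when each one
     * actually becomes durable. */
    void list_insert_after(node_t *prev, node_t *new_node) {
        new_node->next = prev->next;   /* 1. link to next */
        prev->next     = new_node;     /* 2. link to prev */
        /* If only store 2 persists before a crash, prev points to a node
         * whose next pointer was never made durable: a broken list. */
    }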
30
OUTLINE Crash Consistency Problem Current Solutions ThyNVM Evaluation
So how did prior works solve the crash consistency problem? Conclusion
31
update a node in a persistent hash table
CURRENT SOLUTIONS
Explicit interfaces to manage consistency: NV-Heaps [ASPLOS’11], BPFS [SOSP’09], Mnemosyne [ASPLOS’11]
Example Code: update a node in a persistent hash table

    void hashtable_update(hashtable_t* ht, void *key, void *data) {
        list_t* chain = get_chain(ht, key);
        pair_t* pair;
        pair_t updatePair;
        updatePair.first = key;
        pair = (pair_t*) list_find(chain, &updatePair);
        pair->second = data;
    }

Current solutions propose explicit interfaces to protect data from inconsistency. Let’s look at real code that updates a node in a persistent hash table.
32
CURRENT SOLUTIONS

    void TMhashtable_update(TMARCGDECL
                            hashtable_t* ht, void *key, void *data) {
        list_t* chain = get_chain(ht, key);
        pair_t* pair;
        pair_t updatePair;
        updatePair.first = key;
        pair = (pair_t*) TMLIST_FIND(chain, &updatePair);
        pair->second = data;
    }

This is how the code looks after we use the transactional interface. However, there are several issues with this code.
33
Manual declaration of persistent components
CURRENT SOLUTIONS Manual declaration of persistent components void TMhashtable_update(TMARCGDECL hashtable_t* ht, void *key, void*data){ list_t* chain = get_chain(ht, key); pair_t* pair; pair_t updatePair; updatePair.first = key; pair = (pair_t*) TMLIST_FIND(chain, &updatePair); pair->second = data; } First, the programmer has to manually declare the persistent object.
34
Manual declaration of persistent components Need a new implementation
CURRENT SOLUTIONS Manual declaration of persistent components void TMhashtable_update(TMARCGDECL hashtable_t* ht, void *key, void*data){ list_t* chain = get_chain(ht, key); pair_t* pair; pair_t updatePair; updatePair.first = key; pair = (pair_t*) TMLIST_FIND(chain, &updatePair); pair->second = data; } Need a new implementation Second, the programmers need to rewrite the functions so that they correctly manipulate the persistent data
35
Manual declaration of persistent components Need a new implementation
CURRENT SOLUTIONS Manual declaration of persistent components void TMhashtable_update(TMARCGDECL hashtable_t* ht, void *key, void*data){ list_t* chain = get_chain(ht, key); pair_t* pair; pair_t updatePair; updatePair.first = key; pair = (pair_t*) TMLIST_FIND(chain, &updatePair); pair->second = data; } Need a new implementation Third, third party code needs to be consistent Third party code can be inconsistent
36
CURRENT SOLUTIONS Burden on the programmers
Manual declaration of persistent components void TMhashtable_update(TMARCGDECL hashtable_t* ht, void *key, void*data){ list_t* chain = get_chain(ht, key); pair_t* pair; pair_t updatePair; updatePair.first = key; pair = (pair_t*) TMLIST_FIND(chain, &updatePair); pair->second = data; } Need a new implementation. Programmers need to make sure there are no volatile-to-non-volatile pointers. Clearly, writing code for persistent memory is a burden on general programmers. Prohibited Operation. Third party code can be inconsistent. Burden on the programmers
37
OUTLINE Crash Consistency Problem Current Solutions ThyNVM Evaluation
In this work, we present ThyNVM. Conclusion
38
Software transparent consistency in persistent memory systems
OUR GOAL: Software-transparent consistency in persistent memory systems. Execute legacy applications. Reduce the burden on programmers. Enable easier integration of NVM.
The goal of ThyNVM is to provide software-transparent memory crash consistency, so that we can execute legacy applications, reduce the burden on programmers, and enable easier integration of NVM.
39
NO MODIFICATION IN THE CODE
    void hashtable_update(hashtable_t* ht, void *key, void *data) {
        list_t* chain = get_chain(ht, key);
        pair_t* pair;
        pair_t updatePair;
        updatePair.first = key;
        pair = (pair_t*) list_find(chain, &updatePair);
        pair->second = data;
    }

So we will take the exact same code…
40
RUN THE EXACT SAME CODE…
    void hashtable_update(hashtable_t* ht, void *key, void *data) {
        list_t* chain = get_chain(ht, key);
        pair_t* pair;
        pair_t updatePair;
        updatePair.first = key;
        pair = (pair_t*) list_find(chain, &updatePair);
        pair->second = data;
    }

Persistent Memory System
…and run it in a persistent memory system; ThyNVM provides memory crash consistency.
Software-transparent memory crash consistency
41
Periodic checkpointing of data Transparent to application and system
ThyNVM APPROACH
Periodic checkpointing of data, managed by hardware
[Timeline: Epoch 0 (Running, Checkpointing), Epoch 1 (Running, Checkpointing), …]
ThyNVM provides this consistency support by taking periodic checkpoints of data in hardware. If there is a crash, the system rolls back to the last checkpointed state.
Transparent to application and system
42
CHECKPOINTING OVERHEAD
1. Metadata overhead
[Figure: metadata table mapping working locations X, Y to checkpoint locations X’, Y’; timeline of Epoch 0 and Epoch 1 in which the application is STALLED during checkpointing]
2. Checkpointing latency
We notice that there are two major overheads of checkpointing. First, we need to track the locations of the current working copy and the last checkpoint copy, and this metadata can have a huge space overhead. Second, during a checkpoint the program is stalled, so checkpointing latency can have a performance impact on applications. We make two new observations to address these challenges.
43
1. METADATA AND CHECKPOINTING GRANULARITY
[Figure: metadata table (working locations X, Y to checkpoint locations X’, Y’) tracked at page vs. cache block granularity]
First, we observe that there is a trade-off between metadata overhead and checkpointing granularity. If we checkpoint at page granularity, we need one entry per page, so the metadata overhead is small. If we checkpoint at block granularity, we need one entry per block, leading to a huge metadata overhead.
PAGE GRANULARITY: one entry per page, small metadata. BLOCK GRANULARITY: one entry per block, huge metadata.
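A minimal sketch of a metadata table entry and the granularity arithmetic, assuming 4 KB pages and 64 B cache blocks over an illustrative 16 GB region (these sizes are examples, not values from the slides):

    #include <stdint.h>

    /* One translation entry: where the working copy and the last durable
     * checkpoint copy of a region (page or block) live. */
    typedef struct {
        uint64_t working_addr;      /* current working location        */
        uint64_t checkpoint_addr;   /* last checkpointed location      */
        uint8_t  dirty;             /* written since last checkpoint?  */
    } ckpt_entry_t;

    /* Illustrative sizing for a 16 GB region:
     *   page granularity  (4 KB): 16 GB / 4 KB =   4 M entries
     *   block granularity (64 B): 16 GB / 64 B = 256 M entries (64x more)
     * i.e., finer granularity multiplies metadata by page_size/block_size. */
    uint64_t num_entries(uint64_t region_bytes, uint64_t granularity_bytes) {
        return region_bytes / granularity_bytes;
    }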
44
Long latency of writing back data to NVM
2. LATENCY AND LOCATION: DRAM-BASED WRITEBACK
[Figure: 1. Write back data from DRAM to NVM; 2. Update the metadata table (working location X, checkpoint location X’)]
Second, we observe that there is a relationship between the location of data and checkpointing latency. The working copy can be either in DRAM or in NVM. Let’s look at DRAM first. Checkpointing data that is stored in DRAM has two steps. Since data in DRAM is not persistent, the working copy needs to be written back to NVM at checkpointing time to make it persistent. Then we update the metadata table with the location of the checkpoint copy. So there is a long latency for writing back data from DRAM to NVM.
Long latency of writing back data to NVM
45
Short latency in NVM-based remapping
2. LATENCY AND LOCATION: NVM-BASED REMAPPING
[Figure: 2. Update the metadata table (working location Y, checkpoint location X); 3. Write in a new location. No copying of data]
However, if the current working copy of the data is stored in NVM, it is already persistent and there is no need to copy data at checkpointing time. We only need to persist the location of the data in the metadata table, and when there is a new write to that location, we simply remap the write to a new location in NVM. Since no copying is involved, NVM-based remapping has a short latency.
Short latency in NVM-based remapping
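The two checkpointing paths can be sketched as follows. This is an illustrative software model of the idea, not ThyNVM's actual hardware logic; nvm_copy_page, nvm_alloc_block, nvm_write_block, and persist_entry are hypothetical helpers.

    #include <stdint.h>

    typedef struct {                /* same illustrative entry as above */
        uint64_t working_addr;
        uint64_t checkpoint_addr;
    } ckpt_entry_t;

    /* Hypothetical helpers (assumptions, not a real API). */
    extern uint64_t nvm_copy_page(uint64_t dram_addr);
    extern uint64_t nvm_alloc_block(void);
    extern void     nvm_write_block(uint64_t addr, const void *data);
    extern void     persist_entry(ckpt_entry_t *e);

    /* DRAM-based page writeback: the working copy must be copied to NVM to
     * become durable; then the metadata table records the checkpoint copy. */
    void checkpoint_dram_page(ckpt_entry_t *e) {
        e->checkpoint_addr = nvm_copy_page(e->working_addr);  /* long-latency copy */
        persist_entry(e);
    }

    /* NVM-based block remapping: the working copy is already durable, so
     * checkpointing only persists its location; a later write to the block
     * is redirected (remapped) to a fresh NVM location instead of copying. */
    void checkpoint_nvm_block(ckpt_entry_t *e) {
        e->checkpoint_addr = e->working_addr;   /* no data is copied */
        persist_entry(e);
    }

    void write_after_checkpoint(ckpt_entry_t *e, const void *data) {
        e->working_addr = nvm_alloc_block();    /* remap to a new NVM location */
        nvm_write_block(e->working_addr, data);
    }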
46
ThyNVM KEY MECHANISMS Checkpointing granularity Latency and location
Checkpointing granularity: small granularity means large metadata; large granularity means small metadata. Latency and location: writeback from DRAM has long latency; remapping in NVM has short latency.
To summarize, small-granularity checkpointing leads to large metadata overhead, and large-granularity checkpointing leads to small metadata overhead. On the other hand, writeback from DRAM introduces long stall time, while remapping in NVM is very fast. Based on these two observations, ThyNVM uses two key mechanisms to reduce metadata overhead and checkpointing latency:
1. Dual-granularity checkpointing
2. Overlap of execution and checkpointing
47
1. DUAL GRANULARITY CHECKPOINTING
Page Writeback in DRAM (good for streaming writes); Block Remapping in NVM (good for random writes)
For dual-granularity checkpointing, ThyNVM uses a page-granularity writeback scheme for DRAM and a block-remapping scheme for NVM. ThyNVM places pages with high write locality in DRAM and low-locality writes in NVM. Placing high-write-locality pages in DRAM is good for streaming writes, as DRAM coalesces the writes and minimizes write traffic during checkpointing. On the other hand, NVM just remaps blocks and is good for random writes; if these pages were placed in DRAM, DRAM would have to write back whole pages with only one or two updates. A sketch of this placement policy follows.
High write locality pages in DRAM, low write locality pages in NVM
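A minimal sketch of how such a dual-granularity policy might route data, using a simple per-page write counter as the locality metric; the counter and threshold are illustrative stand-ins, not ThyNVM's actual policy (see the MICRO 2015 paper for that).

    #include <stdint.h>
    #include <stdbool.h>

    #define LOCALITY_THRESHOLD 4     /* illustrative threshold */

    typedef struct {
        uint64_t page_addr;
        uint32_t writes_this_epoch;  /* simple proxy for write locality */
        bool     in_dram;            /* true: page writeback; false: block remap */
    } page_info_t;

    /* At an epoch boundary, decide where a page's working copy should live:
     * pages with high write locality go to DRAM (checkpointed by page
     * writeback); the rest stay in NVM (checkpointed by block remapping). */
    void place_page(page_info_t *p) {
        p->in_dram = (p->writes_this_epoch >= LOCALITY_THRESHOLD);
        p->writes_this_epoch = 0;    /* reset the counter for the next epoch */
    }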
48
TRADEOFF SPACE
49
2. OVERLAPPING CHECKPOINTING AND EXECUTION
[Timeline: without overlap, each epoch runs and then stalls for checkpointing; with overlap, Epoch 1's execution proceeds while Epoch 0 is still being checkpointed]
Even with dual-granularity checkpointing, checkpointing latency is dominated by the slow DRAM writeback. To reduce this latency, we propose to overlap checkpointing with execution. Instead of stalling while DRAM pages are being written back to NVM, we let the application start the next epoch. This hides the long latency of the page writeback scheme (sketched below).
Hides the long latency of Page Writeback
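A conceptual sketch of the overlap, written as software pseudocode in C even though ThyNVM implements it in the memory controller; run_epoch, checkpoint_epoch_async, and wait_for_checkpoint are hypothetical stand-ins for the hardware's execution and background writeback.

    /* Hypothetical stand-ins for hardware actions. */
    extern void run_epoch(int epoch);               /* execute the application */
    extern void checkpoint_epoch_async(int epoch);  /* start background ckpt   */
    extern void wait_for_checkpoint(int epoch);     /* block until durable     */

    void execution_loop(int num_epochs) {
        for (int e = 0; e < num_epochs; e++) {
            run_epoch(e);                    /* epoch e executes while epoch    */
                                             /* e-1 is still being checkpointed */
            if (e > 0)
                wait_for_checkpoint(e - 1);  /* keep at most two epochs in flight */
            checkpoint_epoch_async(e);       /* checkpoint epoch e in background  */
        }
        wait_for_checkpoint(num_epochs - 1); /* drain the final checkpoint */
    }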
50
OUTLINE Crash Consistency Problem Current Solutions ThyNVM Evaluation
Now that we have described our key mechanisms, let's take a look at the results. Conclusion
51
SYSTEM ARCHITECTURE
52
MEMORY ADDRESS SPACE
53
METHODOLOGY
Cycle-accurate x86 simulator gem5. Comparison Points:
Ideal DRAM: DRAM-based, no cost for consistency (lowest-latency system). Ideal NVM: NVM-based, no cost for consistency (NVM has higher latency than DRAM). Journaling: hybrid, commits dirty cache blocks; leverages DRAM to buffer dirty blocks. Shadow Paging: hybrid, copy-on-write pages; leverages DRAM to buffer dirty pages.
We evaluated ThyNVM using the cycle-accurate x86 simulator gem5. We compare ThyNVM against four different systems. Ideal DRAM is a DRAM-only system that provides crash consistency support at no cost; as DRAM has lower latency than NVM, this is the lowest-latency system. Ideal NVM is an NVM-only system with zero cost for consistency; this system has higher latency than Ideal DRAM. We also compare ThyNVM with journaling and shadow paging. Both of them have both DRAM and NVM, similar to ThyNVM. Journaling commits updates at cache block granularity, and shadow paging uses copy-on-write at page granularity.
54
ADAPTIVITY TO ACCESS PATTERN
[Charts: write traffic normalized to Journaling, for random and sequential microbenchmarks; lower is better]
So let's see how well ThyNVM adapts to access patterns. We evaluate two microbenchmarks, with random and sequential access patterns. On the x-axis we compare ThyNVM with journaling and shadow paging; on the y-axis we have write traffic normalized to journaling, so lower is better. Comparing the two baselines, journaling is better for the random pattern, and shadow paging is better for the sequential pattern.
Journaling is better for Random and Shadow paging is better for Sequential
ThyNVM adapts to both access patterns
55
OVERLAPPING CHECKPOINTING AND EXECUTION
[Charts: fraction of execution time spent on checkpointing, for random and sequential workloads]
The baseline schemes can spend 35-45% of the execution time on checkpointing. ThyNVM spends only 2.4%/5.5% of the execution time on checkpointing, so it stalls the application for a negligible amount of time.
56
PERFORMANCE OF LEGACY CODE
Within -4.9%/+2.7% of an idealized DRAM/NVM system Provides consistency without significant performance overhead
57
KEY-VALUE STORE TX THROUGHPUT
Storage throughput close to Ideal DRAM
58
OUTLINE Crash Consistency Problem Current Solutions ThyNVM Evaluation
Conclusion
59
with no programming effort
ThyNVM: a new hardware-based checkpointing mechanism with no programming effort. Checkpoints at multiple granularities to minimize both latency and metadata. Overlaps checkpointing and execution. Adapts to DRAM and NVM characteristics. Can enable widespread adoption of persistent memory.
60
Source Code and More Available at
ThyNVM Enabling Software-transparent Crash Consistency In Persistent Memory Systems
61
Our Other FMS 2016 Talks "A Large-Scale Study of Flash Memory Errors in the Field” Onur Mutlu (ETH Zurich & CMU) August 3:50pm Study of flash-based SSD errors in Facebook data centers over the course of 4 years First large-scale field study of flash memory reliability Forum F-22: SSD Testing (Testing Track) Practical Threshold Voltage Distribution Modeling Yixin Luo (CMU PhD Student) August 4:20pm Forum E-22: Controllers and Flash Technology "WARM: Improving NAND Flash Memory Lifetime with Write-hotness Aware Retention Management” Saugata Ghose (CMU Researcher) August 5:45pm Forum C-22: SSD Concepts (SSDs Track)
62
Referenced Papers and Talks
All are available at And, many other previous works on NVM & Persistent Memory DRAM Hybrid memories NAND flash memory
63
Feel free to email me with any questions & feedback
Thank you. Feel free to email me with any questions & feedback
64
ThyNVM Software-Transparent Crash Consistency for Persistent Memory
Onur Mutlu (joint work with Jinglei Ren, Jishen Zhao, Samira Khan, Jongmoo Choi, Yongwei Wu) August 8, 2016 Flash Memory Summit 2016, Santa Clara, CA
65
References to Papers and Talks
66
Challenges and Opportunities in Memory
Onur Mutlu, "Rethinking Memory System Design," keynote talk at the 2016 ACM SIGPLAN International Symposium on Memory Management (ISMM), Santa Barbara, CA, USA, June 2016. [Slides (pptx) (pdf)] [Abstract] Onur Mutlu and Lavanya Subramanian, "Research Problems and Opportunities in Memory Systems," Invited Article in Supercomputing Frontiers and Innovations (SUPERFRI), 2015.
67
Phase Change Memory As DRAM Replacement
Benjamin C. Lee, Engin Ipek, Onur Mutlu, and Doug Burger, "Architecting Phase Change Memory as a Scalable DRAM Alternative," Proceedings of the 36th International Symposium on Computer Architecture (ISCA), pages 2-13, Austin, TX, June 2009. [Slides (pdf)] Benjamin C. Lee, Ping Zhou, Jun Yang, Youtao Zhang, Bo Zhao, Engin Ipek, Onur Mutlu, and Doug Burger, "Phase Change Technology and the Future of Main Memory," IEEE Micro, Special Issue: Micro's Top Picks from 2009 Computer Architecture Conferences (MICRO TOP PICKS), Vol. 30, No. 1, pages 60-70, January/February 2010.
68
STT-MRAM As DRAM Replacement
Emre Kultursay, Mahmut Kandemir, Anand Sivasubramaniam, and Onur Mutlu, "Evaluating STT-RAM as an Energy-Efficient Main Memory Alternative," Proceedings of the 2013 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Austin, TX, April 2013. [Slides (pptx) (pdf)]
69
Taking Advantage of Persistence in Memory
Justin Meza, Yixin Luo, Samira Khan, Jishen Zhao, Yuan Xie, and Onur Mutlu, "A Case for Efficient Hardware-Software Cooperative Management of Storage and Memory," Proceedings of the 5th Workshop on Energy-Efficient Design (WEED), Tel-Aviv, Israel, June 2013. [Slides (pptx)] [Slides (pdf)] Jinglei Ren, Jishen Zhao, Samira Khan, Jongmoo Choi, Yongwei Wu, and Onur Mutlu, "ThyNVM: Enabling Software-Transparent Crash Consistency in Persistent Memory Systems," Proceedings of the 48th International Symposium on Microarchitecture (MICRO), Waikiki, Hawaii, USA, December 2015. [Slides (pptx) (pdf)] [Lightning Session Slides (pptx) (pdf)] [Poster (pptx) (pdf)] [Source Code]
70
Hybrid DRAM + NVM Systems (I)
HanBin Yoon, Justin Meza, Rachata Ausavarungnirun, Rachael Harding, and Onur Mutlu, "Row Buffer Locality Aware Caching Policies for Hybrid Memories," Proceedings of the 30th IEEE International Conference on Computer Design (ICCD), Montreal, Quebec, Canada, September 2012. [Slides (pptx) (pdf)] Best paper award (in Computer Systems and Applications track). Justin Meza, Jichuan Chang, HanBin Yoon, Onur Mutlu, and Parthasarathy Ranganathan, "Enabling Efficient and Scalable Hybrid Memories Using Fine-Granularity DRAM Cache Management," IEEE Computer Architecture Letters (CAL), February 2012.
71
Hybrid DRAM + NVM Systems (II)
Dongwoo Kang, Seungjae Baek, Jongmoo Choi, Donghee Lee, Sam H. Noh, and Onur Mutlu, "Amnesic Cache Management for Non-Volatile Memory," Proceedings of the 31st International Conference on Massive Storage Systems and Technologies (MSST), Santa Clara, CA, June 2015. [Slides (pdf)]
72
NVM Design and Architecture
HanBin Yoon, Justin Meza, Naveen Muralimanohar, Norman P. Jouppi, and Onur Mutlu, "Efficient Data Mapping and Buffering Techniques for Multi-Level Cell Phase-Change Memories," ACM Transactions on Architecture and Code Optimization (TACO), Vol. 11, No. 4, December 2014. [Slides (ppt) (pdf)] Presented at the 10th HiPEAC Conference, Amsterdam, Netherlands, January 2015. [Slides (ppt) (pdf)] Justin Meza, Jing Li, and Onur Mutlu, "Evaluating Row Buffer Locality in Future Non-Volatile Main Memories," SAFARI Technical Report, TR-SAFARI, Carnegie Mellon University, December 2012.
73
Our FMS Talks and Posters
Onur Mutlu, ThyNVM: Software-Transparent Crash Consistency for Persistent Memory, FMS 2016. Onur Mutlu, Large-Scale Study of In-the-Field Flash Failures, FMS 2016. Yixin Luo, Practical Threshold Voltage Distribution Modeling, FMS 2016. Saugata Ghose, Write-hotness Aware Retention Management, FMS 2016. Onur Mutlu, Read Disturb Errors in MLC NAND Flash Memory, FMS 2015. Yixin Luo, Data Retention in MLC NAND Flash Memory, FMS 2015. Onur Mutlu, Error Analysis and Management for MLC NAND Flash Memory, FMS 2014. FMS 2016 posters: WARM: Improving NAND Flash Memory Lifetime with Write-hotness Aware Retention Management Read Disturb Errors in MLC NAND Flash Memory Data Retention in MLC NAND Flash Memory
74
Our Flash Memory Works (I)
1. Retention noise study and management: Yu Cai, Gulay Yalcin, Onur Mutlu, Erich F. Haratsch, Adrian Cristal, Osman Unsal, and Ken Mai, Flash Correct-and-Refresh: Retention-Aware Error Management for Increased Flash Memory Lifetime, ICCD 2012. Yu Cai, Yixin Luo, Erich F. Haratsch, Ken Mai, and Onur Mutlu, Data Retention in MLC NAND Flash Memory: Characterization, Optimization and Recovery, HPCA 2015. Yixin Luo, Yu Cai, Saugata Ghose, Jongmoo Choi, and Onur Mutlu, WARM: Improving NAND Flash Memory Lifetime with Write-hotness Aware Retention Management, MSST 2015. 2. Flash-based SSD prototyping and testing platform: Yu Cai, Erich F. Haratsch, Mark McCartney, Ken Mai, FPGA-based solid-state drive prototyping platform, FCCM 2011.
75
Our Flash Memory Works (II)
3. Overall flash error analysis: Yu Cai, Erich F. Haratsch, Onur Mutlu, and Ken Mai, Error Patterns in MLC NAND Flash Memory: Measurement, Characterization, and Analysis, DATE 2012. Yu Cai, Gulay Yalcin, Onur Mutlu, Erich F. Haratsch, Adrian Cristal, Osman Unsal, and Ken Mai, Error Analysis and Retention-Aware Error Management for NAND Flash Memory, ITJ 2013. 4. Program and erase noise study: Yu Cai, Erich F. Haratsch, Onur Mutlu, and Ken Mai, Threshold Voltage Distribution in MLC NAND Flash Memory: Characterization, Analysis and Modeling, DATE 2013.
76
Our Flash Memory Works (III)
5. Cell-to-cell interference characterization and tolerance: Yu Cai, Onur Mutlu, Erich F. Haratsch, and Ken Mai, Program Interference in MLC NAND Flash Memory: Characterization, Modeling, and Mitigation, ICCD 2013. Yu Cai, Gulay Yalcin, Onur Mutlu, Erich F. Haratsch, Osman Unsal, Adrian Cristal, and Ken Mai, Neighbor-Cell Assisted Error Correction for MLC NAND Flash Memories, SIGMETRICS 2014. 6. Read disturb noise study: Yu Cai, Yixin Luo, Saugata Ghose, Erich F. Haratsch, Ken Mai, and Onur Mutlu, Read Disturb Errors in MLC NAND Flash Memory: Characterization and Mitigation, DSN 2015.
77
Our Flash Memory Works (IV)
7. Flash errors in the field Justin Meza, Qiang Wu, Sanjeev Kumar, and Onur Mutlu, A Large-Scale Study of Flash Memory Errors in the Field, SIGMETRICS 2015. 8. Persistent memory Jinglei Ren, Jishen Zhao, Samira Khan, Jongmoo Choi, Yongwei Wu, and Onur Mutlu, ThyNVM: Enabling Software-Transparent Crash Consistency in Persistent Memory Systems, MICRO 2015.
78
Referenced Papers and Talks
All are available at And, many other previous works on NAND flash memory errors and management
79
Related Videos and Course Materials
Undergraduate Computer Architecture Course Lecture Videos (2013, 2014, 2015) Undergraduate Computer Architecture Course Materials (2013, 2014, 2015) Graduate Computer Architecture Lecture Videos (2013, 2015) Parallel Computer Architecture Course Materials (Lecture Videos) Memory Systems Short Course Materials (Lecture Video on Main Memory and DRAM Basics)
80
Additional Slides on Persistent Memory and NVM
81
Phase Change Memory: Pros and Cons
Pros over DRAM: better technology scaling (capacity and cost), non-volatility, low idle power (no refresh). Cons: higher latencies (~4-15x DRAM, especially write), higher active energy (~2-50x DRAM, especially write), lower endurance (a cell dies after ~10^8 writes), reliability issues (resistance drift). Challenges in enabling PCM as a DRAM replacement/helper: mitigate PCM shortcomings, and find the right way to place PCM in the system.
82
PCM-based Main Memory (I)
How should PCM-based (main) memory be organized? Hybrid PCM+DRAM [Qureshi+ ISCA’09, Dhiman+ DAC’09]: How to partition/migrate data between PCM and DRAM
83
PCM-based Main Memory (II)
How should PCM-based (main) memory be organized? Pure PCM main memory [Lee et al., ISCA’09, Top Picks’10]: How to redesign entire hierarchy (and cores) to overcome PCM shortcomings
84
An Initial Study: Replace DRAM with PCM
Lee, Ipek, Mutlu, Burger, “Architecting Phase Change Memory as a Scalable DRAM Alternative,” ISCA 2009. Surveyed prototypes from (e.g. IEDM, VLSI, ISSCC) Derived “average” PCM parameters for F=90nm
85
Results: Naïve Replacement of DRAM with PCM
Replace DRAM with PCM in a 4-core, 4MB L2 system. PCM organized the same as DRAM: row buffers, banks, peripherals. Result: 1.6x delay, 2.2x energy, 500-hour average lifetime. Lee, Ipek, Mutlu, Burger, “Architecting Phase Change Memory as a Scalable DRAM Alternative,” ISCA 2009.
86
Results: Architected PCM as Main Memory
1.2x delay, 1.0x energy, 5.6-year average lifetime. Scaling improves energy, endurance, density. Caveat 1: Worst-case lifetime is much shorter (no guarantees). Caveat 2: Intensive applications see large performance and energy hits. Caveat 3: Optimistic PCM parameters?
87
A More Viable Approach: Hybrid Memory Systems
[Figure: CPU with a DRAM controller and a PCM controller; DRAM (fast, durable, but small, leaky, volatile, high-cost) alongside Phase Change Memory or Technology X (large, non-volatile, low-cost, but slow, wears out, high active energy)]
Hardware/software manage data allocation and movement to achieve the best of multiple technologies.
Meza+, “Enabling Efficient and Scalable Hybrid Memories,” IEEE Comp. Arch. Letters, 2012. Yoon+, “Row Buffer Locality Aware Caching Policies for Hybrid Memories,” ICCD 2012 Best Paper Award.
88
Data Placement Between DRAM and PCM
Idea: Characterize data access patterns and guide data placement in hybrid memory. Streaming accesses: as fast in PCM as in DRAM. Random accesses: much faster in DRAM. Idea: Place random-access data with some reuse in DRAM, and streaming data in PCM. Yoon+, “Row Buffer Locality-Aware Data Placement in Hybrid Memories,” ICCD 2012 Best Paper Award.
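A minimal sketch of such a placement policy, assuming per-page counters of row buffer hits and misses as the access-pattern signal; the heuristic below is illustrative only, the actual ICCD 2012 policy is more involved.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t row_buffer_hits;    /* streaming accesses mostly hit the row buffer */
        uint32_t row_buffer_misses;  /* random accesses mostly miss it               */
        uint32_t reuse_count;        /* how often the page is re-accessed            */
    } page_stats_t;

    /* Pages whose accesses mostly miss the row buffer, and that are reused,
     * benefit from DRAM; pages with high row buffer locality (streaming)
     * are served almost as fast from PCM, so they can stay there. */
    bool should_cache_in_dram(const page_stats_t *s) {
        return s->reuse_count > 0 && s->row_buffer_misses > s->row_buffer_hits;
    }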
89
Hybrid vs. All-PCM/DRAM [ICCD’12]
[Chart annotations: 29%, 31%]
Now let's compare our mechanism to a homogeneous memory system. Here the comparison points are a 16GB all-PCM and an all-DRAM memory system. Our mechanism achieves 31% better performance than the all-PCM memory system, and comes within 29% of the all-DRAM performance. Note that we do not assume any capacity benefits of PCM over DRAM. And yet, our mechanism bridges the performance gap between the two homogeneous memory systems. If the capacity benefit of PCM were included, the hybrid memory system would perform even better.
31% better performance than all PCM, within 29% of all DRAM performance
Yoon+, “Row Buffer Locality-Aware Data Placement in Hybrid Memories,” ICCD 2012 Best Paper Award.
90
STT-MRAM as Main Memory
Magnetic Tunnel Junction (MTJ) device. Reference layer: fixed magnetic orientation. Free layer: parallel or anti-parallel. The magnetic orientation of the free layer determines the logical state of the device (high vs. low resistance). Write: push a large current through the MTJ to change the orientation of the free layer. Read: sense the current flow. Kultursay et al., “Evaluating STT-RAM as an Energy-Efficient Main Memory Alternative,” ISPASS 2013.
[Figure: MTJ cell with reference layer, barrier, and free layer (logical 0 vs. logical 1); word line, bit line, sense line, and access transistor]
Moving from DRAM to STT-RAM, instead of a capacitor that stores charge, we now have a magnetic tunnel junction with one reference layer that has a fixed magnetic orientation and a free layer that can be parallel or anti-parallel to it; this determines the resistance and, in turn, the data stored in the MTJ. In addition to the MTJ, the cell again has an access transistor. Reading requires a small voltage to be applied across the bit line and sense line, and the current is sensed; this current varies depending on the resistance of the MTJ. To write data, we push a large current into the MTJ, which re-aligns the free layer. The direction of the current determines whether it becomes parallel or anti-parallel, i.e., logical 0 or 1.
91
STT-MRAM: Pros and Cons
Pros over DRAM: better technology scaling, non-volatility, low idle power (no refresh). Cons: higher write latency, higher write energy, reliability? Another level of freedom: can trade off non-volatility for lower write latency/energy (by reducing the size of the MTJ).
92
Architected STT-MRAM as Main Memory
4-core, 4GB main memory, multiprogrammed workloads. ~6% performance loss, ~60% energy savings vs. DRAM. Kultursay+, “Evaluating STT-RAM as an Energy-Efficient Main Memory Alternative,” ISPASS 2013.
93
Other Opportunities with Emerging Technologies
Merging of memory and storage (e.g., a single interface to manage all data). New applications (e.g., ultra-fast checkpoint and restore). More robust system design (e.g., reducing data loss). Processing tightly-coupled with memory (e.g., enabling efficient search and filtering).
94
Coordinated Memory and Storage with NVM (I)
The traditional two-level storage model is a bottleneck with NVM. Volatile data in memory is accessed through a load/store interface; persistent data in storage is accessed through a file system interface. Problem: Operating system (OS) and file system (FS) code to locate, translate, and buffer data become performance and energy bottlenecks with fast NVM stores.
[Figure: two-level store; the processor and caches reach main memory via load/store and virtual-memory address translation, and reach storage (SSD/HDD) via fopen, fread, fwrite through the operating system and file system; persistent (e.g., phase-change) memory can serve both roles]
95
Coordinated Memory and Storage with NVM (II)
Goal: Unify memory and storage management in a single unit to eliminate the wasted work to locate, transfer, and translate data. This improves both energy and performance, and simplifies the programming model as well.
[Figure: unified memory/storage; the processor and caches issue loads/stores to a Persistent Memory Manager, which manages persistent (e.g., phase-change) memory and provides feedback]
Meza+, “A Case for Efficient Hardware-Software Cooperative Management of Storage and Memory,” WEED 2013.
96
The Persistent Memory Manager (PMM)
Exposes a load/store interface to access persistent data: applications can directly access persistent memory, with no conversion, translation, or location overhead for persistent data. Manages data placement, location, persistence, and security, to get the best of multiple forms of storage. Manages metadata storage and retrieval; this can lead to overheads that need to be managed. Exposes hooks and interfaces for system software, to enable better data placement and management decisions. Meza+, “A Case for Efficient Hardware-Software Cooperative Management of Storage and Memory,” WEED 2013.
Let’s take a closer look at the high-level design goals for a Persistent Memory Manager, or PMM. The most noticeable feature of the PMM, from the system software perspective, is that it exposes a load/store interface to access all persistent data in the system. This means that applications can allocate portions of persistent memory, just as they would allocate portions of volatile DRAM memory in today’s systems. Of course, with a diverse array of devices managed by persistent memory, it is important for the PMM to leverage each of their strengths, and mitigate each of their weaknesses, as much as possible. To this end, the PMM is responsible for managing how volatile and persistent data are placed among devices, to ensure that data can be efficiently located, persistence guarantees are met, and any security rules are enforced. The PMM is also responsible for managing metadata storage and retrieval; we will later evaluate how such overheads affect system performance. In addition, to achieve the best performance and energy efficiency for the applications running on a system, the system software needs to provide hints that the hardware can use to make more efficient data placement decisions.
97
The Persistent Memory Manager (PMM)
Persistent objects
And to tie everything together, here is how a persistent memory would operate. Code allocates persistent objects and accesses them using loads and stores, no different from how memory is accessed in today’s machines. Optionally, the software can provide hints about the access patterns or requirements of the software. Then, a Persistent Memory Manager uses this access information and the software hints to allocate, locate, migrate, and access data among the heterogeneous array of devices present in the system, attempting to strike a balance between application-level guarantees, like persistence and security, and performance and energy efficiency.
PMM uses access and hint information to allocate, locate, migrate and access data in the heterogeneous array of devices
98
Performance Benefits of a Single-Level Store
[Chart: ~24X and ~5X performance improvements] Meza+, “A Case for Efficient Hardware-Software Cooperative Management of Storage and Memory,” WEED 2013.
99
Energy Benefits of a Single-Level Store
[Chart: ~16X and ~5X energy improvements] Meza+, “A Case for Efficient Hardware-Software Cooperative Management of Storage and Memory,” WEED 2013.