Presentation transcript:

The Scalable Checkpoint/Restart Library (SCR): Overview and Future Directions
Kathryn Mohror, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
Paradyn Week, May 2, 2011
LLNL-PRES-482473. Lawrence Livermore National Laboratory, P. O. Box 808, Livermore, CA 94551. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

Slide 2: Increased component count in supercomputers means increased failure rate
- Today's supercomputers experience failures on the order of hours; future systems are predicted to fail on the order of minutes.
- Checkpointing: periodically flush application state to a file.
- Parallel file system (PFS) at LLNL: bandwidth from a cluster to the PFS is on the order of 10s of GB/s, with 100s of TB to 1-2 PB of storage.
- Checkpoint data sizes vary from 100s of GB to TB (a worked estimate of the resulting write time follows below).
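As a rough worked example using the ranges above (the specific sizes are illustrative, not measured values): writing a 2 TB checkpoint at 10 GB/s takes on the order of 2,000 GB / 10 GB/s = 200 s, before accounting for any contention from other jobs or clusters sharing the file system.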

Slide 3: Writing checkpoints to the parallel file system is very expensive
[Figure: the Hera, Atlas, and Zeus clusters write through gateway nodes to a shared parallel file system; checkpoints suffer network contention, contention for shared file system resources, and contention from other clusters for the file system.]

Slide 4: Failures cause loss of valuable compute time
- BG/L at LLNL: 192K cores; checkpoint every 7.5 hours; achieved 4 days of computation in 6.5 days.
- Atlas at LLNL: 4,096 cores; checkpoint every 2 hours, with checkpoints taking minutes; MTBF of 4 hours.
- Juno at LLNL: 256 cores; checkpoints average 20 minutes; 25% of time spent checkpointing.
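For scale, the BG/L figure above works out to roughly 4 / 6.5 ≈ 62% machine efficiency: more than a third of the allocation went to checkpointing, recovering, and recomputing rather than forward progress.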

Slide 5: Node-local storage can be utilized to reduce checkpointing costs
- Observations: only the most recent checkpoint data is needed, and typically only a single node fails at a time.
- Idea: store checkpoint data redundantly on the compute cluster; write only a few checkpoints to the parallel file system.
- Node-local storage is a performance opportunity AND a challenge:
  + scales with the rest of the system
  - fails and degrades over time
  - is physically distributed
  - is a limited resource

Slide 6: SCR works for codes that do globally-coordinated, application-level checkpointing

    #include <mpi.h>
    #include <stdio.h>

    void checkpoint();

    int main(int argc, char* argv[]) {
        MPI_Init(&argc, &argv);
        for (int t = 0; t < TIMESTEPS; t++) {
            /* ... do work ... */
            checkpoint();
        }
        MPI_Finalize();
        return 0;
    }

    void checkpoint() {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* each rank writes its state to its own checkpoint file */
        char file[256];
        sprintf(file, "rank_%d.ckpt", rank);
        FILE* fs = fopen(file, "w");
        if (fs != NULL) {
            fwrite(state, ..., fs);   /* application state; arguments elided on the slide */
            fclose(fs);
        }
        return;
    }

Slide 7: SCR works for codes that do globally-coordinated, application-level checkpointing (the same code instrumented with SCR calls)

    #include <mpi.h>
    #include <stdio.h>
    #include "scr.h"

    void checkpoint();

    int main(int argc, char* argv[]) {
        MPI_Init(&argc, &argv);
        SCR_Init();
        for (int t = 0; t < TIMESTEPS; t++) {
            /* ... do work ... */
            int flag;
            SCR_Need_checkpoint(&flag);   /* let SCR decide when it is time to checkpoint */
            if (flag) checkpoint();
        }
        SCR_Finalize();
        MPI_Finalize();
        return 0;
    }

    void checkpoint() {
        SCR_Start_checkpoint();
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char file[256];
        sprintf(file, "rank_%d.ckpt", rank);

        /* SCR rewrites the path so the file lands in the cache, e.g. node-local storage */
        char scr_file[SCR_MAX_FILENAME];
        SCR_Route_file(file, scr_file);

        FILE* fs = fopen(scr_file, "w");
        if (fs != NULL) {
            fwrite(state, ..., fs);   /* application state; arguments elided on the slide */
            fclose(fs);
        }
        SCR_Complete_checkpoint(1);   /* 1 indicates this rank wrote its files successfully */
        return;
    }
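The slides show only the write path. On restart, the same routing call is used to locate the cached copy of each rank's most recent checkpoint; the sketch below is an assumption about that read path based on the SCR interface of this era and should be checked against the SCR documentation (the fread placeholder simply mirrors the slide's fwrite):

    void restart() {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char file[256];
        sprintf(file, "rank_%d.ckpt", rank);

        /* after SCR_Init(), resolve the name to the cached copy of the checkpoint */
        char scr_file[SCR_MAX_FILENAME];
        SCR_Route_file(file, scr_file);

        FILE* fs = fopen(scr_file, "r");
        if (fs != NULL) {
            fread(state, ..., fs);   /* read back the application state */
            fclose(fs);
        }
        return;
    }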

Slide 8: SCR utilizes node-local storage and the parallel file system
[Figure: every MPI process checkpoints through the same SCR_Start_checkpt / SCR_Route_file / fwrite / SCR_Complete_checkpt sequence; checkpoints are cached redundantly on the compute nodes, a subset is written to the parallel file system, and a failed node (X) is recovered from the cached copies.]

Slide 9: SCR uses multiple checkpoint levels for performance and resiliency
Checkpoint cost and resiliency increase from Level 1 (low) to Level 3 (high):
- Level 1, Local: store checkpoint data on the node's local storage, e.g. disk or memory.
- Level 2, Partner: write to local storage and to a partner node. XOR: write to local storage, and small sets of nodes collectively compute and store parity redundancy data (RAID-5 style); a small parity sketch follows below.
- Level 3, Stable storage: write to the parallel file system.
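To make the XOR level concrete, here is a minimal parity sketch; it illustrates RAID-5-style redundancy, not SCR's implementation, and the fixed block length per node is a simplifying assumption:

    #include <stddef.h>
    #include <string.h>

    /* Compute the parity block for a set of equal-length checkpoint blocks.
     * With the N data blocks and the parity block stored on different nodes,
     * any single lost block can be rebuilt by XORing the N surviving blocks. */
    void xor_parity(const unsigned char **blocks, int nblocks,
                    size_t len, unsigned char *parity) {
        memset(parity, 0, len);
        for (int i = 0; i < nblocks; i++) {
            for (size_t j = 0; j < len; j++) {
                parity[j] ^= blocks[i][j];
            }
        }
    }

    /* Reconstruction reuses the same routine: XOR the parity block together
     * with the remaining data blocks to recover the missing one. */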

Slide 10: Aggregate checkpoint bandwidth to node-local storage scales linearly on Coastal
[Figure: aggregate checkpoint bandwidth vs. node count. The parallel file system was built for 10 GB/s; node-local SSDs are roughly 10x faster than the PFS, Partner/XOR on RAM disk roughly 100x, and Local on RAM disk roughly 1,000x.]

Slide 11: Speedups achieved using SCR with PF3d

  Cluster   Nodes   Data      PFS name    PFS time   PFS BW     Cache type        Cache time   Cache BW    BW speedup
  Hera      256     2.07 TB   lscratchc   300 s      7.1 GB/s   XOR on RAM disk   15.4 s       138 GB/s    19x
  Atlas     512     2.06 TB   lscratcha   439 s      4.8 GB/s   XOR on RAM disk   9.1 s        233 GB/s    48x
  Coastal   1024    2.14 TB   lscratchb   1051 s     2.1 GB/s   XOR on RAM disk   4.5 s        483 GB/s    234x
  Coastal   1024    -- TB     lscratch    -- s       4.2 GB/s   XOR on RAM disk   -- s         603 GB/s    14x

Slide 12: SCR can recover from 85% of failures using checkpoints that are 100-1,000x faster than the PFS
Observed 191 failures spanning 5.6 million node-hours across 871 runs of PF3d on 3 different clusters (Coastal, Hera, and Atlas).
- Level 1, Local checkpoint sufficient (31% of failures):
  42   temporary parallel file system write failure (a subsequent job in the same allocation succeeded)
  10   job hang
  7    transient processor failure (floating-point exception or segmentation fault)
- Level 2, Partner/XOR checkpoint sufficient (54% of failures):
  104  node failure (bad power supply, failed network card, or unexplained reboot)
- Level 3, PFS checkpoint sufficient (15% of failures):
  23   permanent parallel file system write failure (no job in the same allocation succeeded)
  3    permanent hardware failure (bad CPU or memory DIMM)
  2    power breaker shut off

Slide 13: Create a model to estimate the best parameters for SCR and predict its performance on future machines
- Several parameters determine SCR's performance: the checkpoint interval; the checkpoint types and frequency (e.g. how many local checkpoints between each XOR checkpoint); checkpoint costs; and failure rates.
- Developed a probabilistic Markov model.
- Metrics:
  - Efficiency: how much time is spent actually progressing the simulation, accounting for time spent checkpointing, recovering, and recomputing.
  - Parallel file system load: expected frequency of checkpoints to the parallel file system.
A simplified stand-in for the efficiency metric is sketched below.
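The slides do not reproduce the Markov model's equations. As a rough stand-in, a classic first-order, single-level estimate conveys the shape of the efficiency metric; the formula and the numbers in main below are illustrative assumptions, not the paper's hierarchical model:

    #include <math.h>
    #include <stdio.h>

    /* First-order efficiency estimate for a single checkpoint level.
     * T: compute time between checkpoints, C: checkpoint cost,
     * R: restart/recovery cost, M: mean time between failures,
     * all in the same time unit and assumed small relative to M. */
    static double efficiency(double T, double C, double R, double M) {
        double overhead = C / (T + C);              /* time lost writing checkpoints */
        double rework   = (R + (T + C) / 2.0) / M;  /* expected recovery plus recompute */
        return 1.0 - overhead - rework;
    }

    /* Young's approximation for the interval that balances the two terms. */
    static double optimal_interval(double C, double M) {
        return sqrt(2.0 * C * M);
    }

    int main(void) {
        double C = 200.0, R = 300.0, M = 4.0 * 3600.0;  /* seconds; illustrative only */
        double T = optimal_interval(C, M);
        printf("T_opt = %.0f s, efficiency = %.2f\n", T, efficiency(T, C, R, M));
        return 0;
    }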

Slide 14: How does checkpointing interval affect efficiency?
[Figure: modeled efficiency vs. checkpoint interval for several multiples of today's checkpoint cost (C) and failure rate (F); 1x marks today's values.]
- When checkpoints are rare, system efficiency depends primarily on the failure rate.
- When checkpoints are frequent, system efficiency depends primarily on the checkpoint cost.
- Maximum efficiency depends on both the checkpoint cost and the failure rate.

Slide 15: How does multi-level checkpointing compare to single-level checkpointing to the PFS?
[Figure: modeled results across PFS checkpoint costs and checkpoint-level configurations; "Today's cost" marks the current PFS checkpoint cost.]

Slide 16: Multi-level checkpointing requires fewer writes to the PFS
[Figure: expected time between checkpoints to the PFS (seconds) across PFS checkpoint costs and level configurations; "Today's cost" and "Today's failure rate" mark current values.]
- More expensive checkpoints are written more rarely.
- Higher failure rates require more frequent checkpoints.
- Multi-level checkpointing requires fewer writes to the parallel file system.

Slide 17: Summary
- Multi-level checkpointing library, SCR: low-cost checkpointing schemes up to 1,000x faster than the PFS.
- Failure analysis of several HPC systems: 85% of failures can be recovered from low-cost checkpoints.
- Hierarchical Markov model shows the benefits of multi-level checkpointing: increased machine efficiency and reduced load on the parallel file system.
- The advantages are expected to increase on future systems: the model predicts that 85% efficiency can still be achieved on systems 50x less reliable than today's.

Slide 18: Current and future directions -- there's still more work to do!
[Figure: compute nodes still contend for the parallel file system when checkpoints are flushed.]

Slide 19: Use an overlay network (MRNet) to write checkpoints to the PFS in a controlled way
[Figure: a "forest" of writer processes between the compute nodes and the parallel file system funnels checkpoint data to the PFS.]
A conceptual sketch of funneling writes through a subset of writer ranks follows below.
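A minimal MPI sketch of the funneling idea, assuming for illustration that ranks are split into fixed-size groups, that each group's lowest rank acts as its writer, and that all ranks contribute equal-sized buffers; this is a conceptual stand-in, not SCR's MRNet-based implementation (the group size, output path, and gather-based transfer are assumptions):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define WRITERS_PER_GROUP 32   /* illustrative group size */

    /* Funnel each rank's checkpoint buffer through one writer per group,
     * so only a bounded number of processes touch the PFS at once. */
    void funneled_write(const char *buf, int len, MPI_Comm comm) {
        int rank;
        MPI_Comm_rank(comm, &rank);

        /* split ranks into groups; the lowest rank in each group is its writer */
        MPI_Comm group;
        MPI_Comm_split(comm, rank / WRITERS_PER_GROUP, rank, &group);
        int grank, gsize;
        MPI_Comm_rank(group, &grank);
        MPI_Comm_size(group, &gsize);

        /* gather the group's checkpoint data at its writer */
        char *all = NULL;
        if (grank == 0) all = malloc((size_t)len * gsize);
        MPI_Gather(buf, len, MPI_CHAR, all, len, MPI_CHAR, 0, group);

        if (grank == 0 && all != NULL) {
            /* the writer streams the whole group's data to the parallel file system */
            char path[256];
            snprintf(path, sizeof(path), "pfs/group_%d.ckpt", rank / WRITERS_PER_GROUP);
            FILE *fs = fopen(path, "w");
            if (fs != NULL) {
                fwrite(all, 1, (size_t)len * gsize, fs);
                fclose(fs);
            }
            free(all);
        }
        MPI_Comm_free(&group);
    }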

Slide 20: Average total I/O time per checkpoint with and without SCR/MRNet
[Figure: measured average total I/O time per checkpoint, comparing the SCR/MRNet configuration with a single writer against writing every checkpoint directly to the parallel file system.]

Slide 21: SCR/MRNet integration
- There is still work to do for performance: the current asynchronous drain uses a single writer rather than a forest of writers.
- Although I/O time is greatly improved, there is a scalability problem in SCR_Complete_checkpoint: the current implementation uses a single writer and takes too long to drain the checkpoints at larger scales.

Slide 22: Compress checkpoints to reduce checkpointing overheads
[Figure: array A is partitioned across processes (A0, A1, A2, A3), the pieces are interleaved, and the interleaved data is compressed before being written to the parallel file system.]
~70% reduction in checkpoint file size! A hedged sketch of the interleave-and-compress step follows below.
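A minimal sketch of an interleave-then-compress step using zlib, assuming for illustration that each process contributes an equal-length block of doubles and that interleaving is element-wise; the slide does not specify the real scheme's partitioning or interleaving granularity:

    #include <stdlib.h>
    #include <zlib.h>

    /* Interleave nblocks equal-length blocks of doubles element-wise, then
     * compress the result with zlib. Returns the compressed size in bytes,
     * or 0 on failure; on success the caller frees *out. */
    size_t interleave_and_compress(const double **blocks, int nblocks,
                                   size_t elems_per_block, unsigned char **out) {
        size_t total = (size_t)nblocks * elems_per_block;
        double *interleaved = malloc(total * sizeof(double));
        if (interleaved == NULL) return 0;

        /* element i of block b lands at position i * nblocks + b */
        for (size_t i = 0; i < elems_per_block; i++)
            for (int b = 0; b < nblocks; b++)
                interleaved[i * nblocks + b] = blocks[b][i];

        uLongf clen = compressBound(total * sizeof(double));
        *out = malloc(clen);
        if (*out == NULL) { free(interleaved); return 0; }

        int rc = compress(*out, &clen,
                          (const Bytef *)interleaved, total * sizeof(double));
        free(interleaved);
        if (rc != Z_OK) { free(*out); *out = NULL; return 0; }
        return (size_t)clen;
    }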

Slide 23: Comparison of N->N and N->M checkpointing

Slide 24: Summary of compression effectiveness
Compression factor = (uncompressed - compressed) / compressed * 100
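As a worked example of this metric with hypothetical sizes: a 10 GB checkpoint that compresses to 3 GB has a compression factor of (10 - 3) / 3 * 100 ≈ 233, which is the factor corresponding to the ~70% file-size reduction quoted on slide 22.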

Slide 25: The MRNet nodes add extra levels of resiliency
[Figure: nodes in an XOR set are geographically dispersed across the machine for increased resiliency; X marks failed nodes.]

Slide 26: Thanks!
- Adam Moody, Greg Bronevetsky, Bronis de Supinski (LLNL)
- Tanzima Islam, Saurabh Bagchi, Rudolf Eigenmann (Purdue)
For more information:
- SCR is open source under a BSD license.
- Adam Moody, Greg Bronevetsky, Kathryn Mohror, and Bronis R. de Supinski, "Design, Modeling, and Evaluation of a Scalable Multi-level Checkpointing System," LLNL-CONF, SC'10.