
1 Understanding System Characteristics of Online Erasure Coding on Scalable, Distributed and Large-Scale SSD Array Systems Sungjoon Koh, Jie Zhang, Miryeong Kwon, Jungyeon Yoon, David Donofrio, Nam Sung Kim and Myoungsoo Jung

2 Take-away
Motivation: Distributed systems are starting to adopt erasure coding as a fault tolerance mechanism instead of replication because of replication's storage overhead.
Goal: Understand the system characteristics of online erasure coding by analyzing them and comparing them with those of replication.
Observations on online erasure coding:
- Up to 13× I/O performance degradation compared to replication.
- Around 50% CPU usage and a large number of context switches.
- I/O amplification of up to 700× the total request volume.
- Network traffic among the storage nodes of up to 500× the total request volume.
Summary of our work: We quantitatively observe and measure the various overheads imposed by online erasure coding on a distributed system that consists of 52 SSDs, and we collect block-level traces from the all-flash-array-based storage clusters, which can be downloaded freely.

3 Overall Results

4 Introduction
Growing demand for scalable, high-performance distributed storage systems.

5 Employing SSDs in HPC & DC Systems
HPC and datacenter (DC) systems are replacing HDDs with SSDs for higher bandwidth, shorter latency, and lower power consumption.

6 Storage System Failures
Storage systems typically experience regular failures:
1) Storage failures. For example, Facebook reports that up to 3% of its HDDs fail each day (M. Sathiamoorthy et al., “XORing elephants: Novel erasure codes for big data,” PVLDB, 2013). Although SSDs are more reliable than HDDs, daily failures cannot be ignored.
2) Network switch errors, power outages, and soft/hard errors.
“So we need a fault tolerance mechanism.”

7 Fault Tolerance Mechanisms in Distributed Systems: Replication
The traditional fault tolerance mechanism: a simple and effective way to make a system resilient.
High storage overhead (3×). Especially for SSDs, replication is expensive because of their high cost per GB, and it degrades performance due to SSD-specific characteristics (e.g., garbage collection and wear-out).
“We need an alternative method to reduce the storage overhead.”

8 Fault Tolerance Mechanisms in Distributed Systems: Erasure Coding
An alternative to replication, with lower storage overhead.
High reconstruction cost (a well-known problem): for example, a Facebook cluster with erasure coding adds more than 100 TB of network traffic in a day, and much research aims to reduce this reconstruction cost.
Beyond reconstruction, we observed significant overheads imposed during ordinary I/O services in a distributed system employing erasure codes.

9 Background
Reed-Solomon: the erasure coding algorithm.
Ceph: the distributed system used in this research (architecture, data path, storage stack).

10 Reed-Solomon
The most famous erasure coding algorithm. It divides data into k equal data chunks and generates m coding chunks; the k data chunks form a stripe. Encoding is a multiplication of a generator matrix with the data chunks as a vector. Reed-Solomon with k data chunks and m coding chunks is written “RS(k,m)” and can recover from up to m chunk failures; e.g., RS(4,3) tolerates 3 failures. A toy encoding sketch follows below.
[Figure: data chunks D0..Dk plus coding chunks C0..Cm; the generator matrix maps the RS(4,3) stripe D0-D3 to coding chunks C0-C2.]
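
To make the encoding step concrete, here is a minimal, illustrative Python sketch of Reed-Solomon encoding as a generator-matrix (Vandermonde) multiply over a small prime field. Production systems such as Ceph's erasure-code backends work over GF(2^8) on byte-sized chunks with a systematic generator matrix; the field size, chunk values, and function names below are assumptions chosen for readability, not the paper's implementation.

```python
# Toy Reed-Solomon encoding: codeword = generator matrix x data vector.
# A small prime field GF(11) keeps the arithmetic readable; real deployments
# use GF(2^8) and a systematic generator so the first k symbols equal the data.

P = 11           # prime field size (needs at least k + m distinct points)
K, M = 4, 3      # RS(4,3): 4 data chunks, 3 coding chunks

def generator_matrix(k, m, p):
    # (k + m) x k Vandermonde matrix: row x evaluates a degree-<k polynomial
    # at point x, which is why any k codeword symbols recover the data.
    return [[pow(x, j, p) for j in range(k)] for x in range(k + m)]

def encode(data, k=K, m=M, p=P):
    assert len(data) == k
    G = generator_matrix(k, m, p)
    return [sum(g * d for g, d in zip(row, data)) % p for row in G]

stripe = encode([3, 1, 4, 1])   # toy one-symbol "chunks"
print(stripe)                   # [3, 9, 7, 3, 3, 2, 6]; any 4 of the 7 suffice
```

Losing any m = 3 of the 7 symbols still leaves k = 4 evaluations of the underlying polynomial, which is exactly the recovery property RS(4,3) promises.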

11 Ceph Architecture
Client nodes are connected to the storage nodes through the “public network”, while the storage nodes are connected to each other through the “private network”. Each storage node runs several object storage device daemons (OSDs) and monitors. OSDs handle read/write services; monitors manage access permissions and the status of the OSDs.

12 Data Path
A file/block is handled as an object. The object is assigned to a placement group (PG), which consists of several OSDs, according to the result of a hash function. The CRUSH algorithm determines the primary OSD within the PG, and the object is sent to that primary OSD. The primary OSD then sends the object to the other OSDs (secondary, tertiary, ...) in the form of replicas or chunks, depending on the fault tolerance mechanism. A simplified sketch of this mapping follows below.
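
The sketch below only illustrates the object-to-PG-to-OSD idea: the PG count, OSD naming, and hash choices are assumptions, and pg_to_osds is a stand-in for Ceph's real CRUSH algorithm, which works over a cluster map with failure-domain rules.

```python
# Simplified sketch of Ceph-style placement (object -> PG -> OSDs).
# Real Ceph hashes with rjenkins, applies pool masks, and runs CRUSH;
# none of that is reproduced here.
import hashlib

PG_COUNT = 128                            # placement groups in the pool (assumed)
OSDS = [f"osd.{i}" for i in range(52)]    # 52 SSD-backed OSDs, as in the testbed

def object_to_pg(object_name: str) -> int:
    # Hash the object name and fold it into a PG id.
    digest = hashlib.sha1(object_name.encode()).digest()
    return int.from_bytes(digest[:4], "little") % PG_COUNT

def pg_to_osds(pg_id: int, count: int) -> list:
    # Stand-in for CRUSH: deterministically pick `count` distinct OSDs for
    # the PG; the first one acts as the primary OSD.
    chosen, attempt = [], 0
    while len(chosen) < count:
        h = hashlib.sha1(f"{pg_id}-{attempt}".encode()).digest()
        osd = OSDS[int.from_bytes(h[:4], "big") % len(OSDS)]
        if osd not in chosen:             # CRUSH also skips duplicates/failed OSDs
            chosen.append(osd)
        attempt += 1
    return chosen

pg = object_to_pg("rbd_data.1234.0000000000000000")
acting_set = pg_to_osds(pg, count=3)      # 3-replication: primary + 2 replicas
print(pg, acting_set)                     # for RS(6,3) the PG would span 9 OSDs
```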

13 Storage Stack “Implemented in user space.”

14 Analysis Overview
1) Overall performance: throughput & latency.
2) CPU utilization & number of context switches.
3) Actual amount of reads & writes served from disks.
4) Private network traffic.

15 Object Management in Erasure Coding
We observe that erasure coding uses a different object management scheme than replication, in order to manage the data/coding chunks. It has two phases: object initialization and object update.
i) Object initialization: a small (k KB) write allocates a whole 4 MB object, fills the remainder with dummy data, and generates the coding chunks for the entire object; a back-of-the-envelope sketch follows below.
[Figure: a k KB write expands into a 4 MB object plus its coding chunks.]
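
A rough sketch of the amplification implied by object initialization, under assumptions taken from the figure (4 MB objects) and the RS(6,3) layout; the numbers are illustrative, not measurements from the paper.

```python
# Back-of-the-envelope write amplification of object initialization:
# a small client write allocates a full object (padded with dummy data)
# and all of its coding chunks. The sizes below are assumptions.

OBJECT_MB = 4                 # object size shown in the figure
K, M = 6, 3                   # RS(6,3) layout used in the evaluation

def init_write_amplification(request_kb: float) -> float:
    # Data actually written: the whole object plus its m/k coding overhead.
    written_kb = OBJECT_MB * 1024 * (1 + M / K)
    return written_kb / request_kb

print(f"{init_write_amplification(4):.0f}x")   # a 4 KB first write -> ~1536x
```

The up-to-700× amplification reported later is an aggregate over all requests, some of which update objects that are already initialized, so it sits below this per-first-touch worst case.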

16 Object Management in Erasure Coding
ii) Object update, a read-modify-write sequence (sketched below):
i) read the whole stripe (data chunks 0-5),
ii) regenerate the coding chunks (coding chunks 0-2),
iii) write them back to storage.
[Figure: updating one data chunk triggers a read of the full stripe, re-encoding, and a write-back of the data and coding chunks.]
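
The sketch below mirrors those three steps and the resulting read/write traffic for a small overwrite. The chunk size and the assumption that all data and coding chunks are written back are simplifications, not Ceph's exact update path.

```python
# Read-modify-write cost of updating a small region of an erasure-coded
# object, following the three steps above. A 4 MB object striped as RS(6,3)
# is assumed for illustration.

K, M = 6, 3
CHUNK_KB = 4 * 1024 / K          # ~683 KB data chunks for a 4 MB object

def update_traffic(request_kb: float):
    read_kb = K * CHUNK_KB                 # i)   read the whole stripe
    # ii)  re-encode the m coding chunks (CPU work, no extra storage reads)
    write_kb = (K + M) * CHUNK_KB          # iii) write data + coding chunks back
    return read_kb / request_kb, write_kb / request_kb

r_amp, w_amp = update_traffic(4)           # a 4 KB overwrite
print(f"read ~{r_amp:.0f}x, write ~{w_amp:.0f}x the request size")
```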

17 Workload Description
Fault tolerance configurations: 3-replication, RS(6,3), RS(10,4).
Micro benchmark: Flexible I/O (FIO).
Request size (KB): 1, 2, 4, 8, 16, 32, 64, 128.
Access type:     Sequential      Random
Pre-write:       X               O
Operation type:  Write, Read     Write, Read
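
A sketch of how this parameter matrix could be driven with fio is shown below; the job names, target file, and fixed options (ioengine, size, runtime) are placeholders, not the authors' actual scripts.

```python
# Enumerate the workload matrix above as fio command lines.
import itertools

REQUEST_SIZES_KB = [1, 2, 4, 8, 16, 32, 64, 128]
PATTERNS = {"seq-write": "write", "seq-read": "read",
            "rand-write": "randwrite", "rand-read": "randread"}

def fio_cmd(name, rw, bs_kb, target="/mnt/cephrbd/testfile"):
    # The target path and fixed options are placeholders for illustration.
    return (f"fio --name={name}-{bs_kb}k --rw={rw} --bs={bs_kb}k "
            f"--filename={target} --ioengine=libaio --direct=1 "
            f"--size=1g --runtime=60 --time_based")

for (name, rw), bs in itertools.product(PATTERNS.items(), REQUEST_SIZES_KB):
    print(fio_cmd(name, rw, bs))
```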

18 Analysis Overview
1) Overall performance: throughput & latency.
2) CPU utilization & number of context switches.
3) Actual amount of reads & writes served from disks.
4) Private network traffic.

19 Performance Comparison (Sequential Write)
Significant performance degradation with Reed-Solomon:
Throughput: up to 11.3× worse with RS.
Latency: up to 12.9× longer with RS.
The degradation for 4~16 KB request sizes is not acceptable.
The computation for encoding, the data management, and the additional network traffic cause the degradation in erasure coding.
[Figure: sequential-write throughput and latency for replication vs. RS.]

20 Performance Comparison (Sequential Read)
Performance degradation with Reed-Solomon:
Throughput: 3.4× worse with RS (4 KB requests).
Latency: 3.4× longer with RS (4 KB requests).
Even though there were no failures, performance still degraded. This is caused by “RS-concatenation”, which gathers the data chunks of a stripe from multiple OSDs and concatenates them, generating extra data transfers; a sketch follows below.
[Figure: a stripe of data chunks 0-5 being concatenated to serve a read.]
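
A minimal sketch of why a small read can still touch the whole stripe, assuming the requested range is served only after the object is reassembled from its k data chunks; the chunk size and helper names are illustrative.

```python
# Serving a small read from an erasure-coded object by concatenating the
# stripe's data chunks first (illustrative I/O model, not Ceph's code path).

K = 6
CHUNK = 4 * 1024 * 1024 // K       # ~683 KB chunks of a 4 MB object

def read_range(fetch_chunk, offset, length):
    # fetch_chunk(i) stands for a network read of data chunk i from its OSD.
    stripe = b"".join(fetch_chunk(i) for i in range(K))   # RS-concatenation
    return stripe[offset:offset + length]

transferred = 0
def fake_fetch(i):
    global transferred
    transferred += CHUNK
    return bytes(CHUNK)            # dummy chunk contents

data = read_range(fake_fetch, offset=0, length=4096)      # a 4 KB read
print(f"{len(data)} bytes returned, ~{transferred // len(data)}x read amplification")
```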

21 Analysis Overview
1) Overall performance: throughput & latency.
2) CPU utilization & number of context switches.
3) Actual amount of reads & writes served from disks.
4) Private network traffic.

22 Computing and Software Overheads (CPU Utilization)
RS requires many more CPU cycles than replication, for both random writes and random reads, and user-mode CPU utilization accounts for 70~75% of the total CPU cycles. This is uncommon in RAID systems and stems from the user-level implementation (e.g., the OSD daemon, PG backend, and fault tolerance modules).
[Figures: CPU utilization for random write and random read.]

23 Computing and Software Overheads (Context Switches)
Relative number of context switches = (number of context switches) / (total request amount in MB).
Many more context switches occur with RS than with replication.
Read: data transfers among OSDs and the computation during RS-concatenation.
Write: 1) initializing an object involves many writes and a significant amount of computation; 2) updating an object introduces many transfers among OSDs through user-level modules.
[Figures: relative number of context switches for random write and random read.]
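
One way to collect this metric on Linux is sketched below: sample the system-wide context-switch counter before and after a run and normalize by the request volume. The /proc/stat “ctxt” field is standard Linux; the workload hook and request total are placeholders.

```python
# Relative number of context switches = context switches during the run
# divided by the MB of requests issued (placeholder value below).
def total_context_switches() -> int:
    # /proc/stat's "ctxt" line holds the cumulative context-switch count.
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt"):
                return int(line.split()[1])
    raise RuntimeError("ctxt line not found")

before = total_context_switches()
# ... run the fio workload here (placeholder) ...
after = total_context_switches()

request_mb = 1024                     # placeholder: total MB issued by clients
print((after - before) / request_mb, "context switches per MB")
```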

24 Analysis Overview
1) Overall performance: throughput & latency.
2) CPU utilization & number of context switches.
3) Actual amount of reads & writes served from disks.
4) Private network traffic.

25 I/O Amplification (Random Write)
I/O amplification = (read/write amount from storage in MB) / (total request amount in MB).
Erasure coding causes write amplification of up to 700× the total request volume.
Why is the write amplification of random writes so large?
Random: mostly initializes objects (Initialize, Initialize, ..., Initialize, Update).
Sequential: mostly updates objects (Initialize, Update, ..., Update, Initialize).
[Figures: write amplification and read amplification for random writes.]
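
This metric can be derived from standard Linux block-device counters, as sketched below. The device name and request total are placeholders; the field positions follow the documented /sys/block/<dev>/stat layout, with sector counts in 512-byte units.

```python
# Read/write amplification from /sys/block/<dev>/stat, which reports
# cumulative sectors read (field 3) and sectors written (field 7).
def sectors(dev: str):
    with open(f"/sys/block/{dev}/stat") as f:
        fields = f.read().split()
    return int(fields[2]), int(fields[6])    # (sectors read, sectors written)

dev = "sdb"                                  # placeholder device name
r0, w0 = sectors(dev)
# ... run the workload here (placeholder) ...
r1, w1 = sectors(dev)

request_mb = 1024                            # placeholder: MB issued by clients
read_mb = (r1 - r0) * 512 / 2**20
write_mb = (w1 - w0) * 512 / 2**20
print(f"read amp {read_mb / request_mb:.1f}x, write amp {write_mb / request_mb:.1f}x")
```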

26 I/O Amplification (Read)
Read amplification is caused by RS-concatenation.
Random read: mostly reads different stripes, so there is a lot of read amplification.
Sequential read: consecutive I/O requests read data from the same stripe, so there is almost no read amplification.
[Figures: read amplification for random read and sequential read.]

27 Analysis Overview
1) Overall performance: throughput & latency.
2) CPU utilization & number of context switches.
3) Actual amount of reads & writes served from disks.
4) Private network traffic.

28 Network Traffic Among the Storage Nodes
Shows a similar trend to the I/O amplification.
Erasure coding: for writes, initializing and updating objects causes heavy network traffic; for reads, RS-concatenation causes heavy network traffic.
Replication exhibits only the minimum data transfers required for necessary communication (e.g., OSD interaction to monitor the status of each OSD).
[Figures: private-network traffic for random write and random read.]
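
Private-network traffic can be sampled per interface from sysfs, as in the sketch below; the interface name and request total are placeholders, assuming the cluster (private) network is bound to its own NIC.

```python
# Bytes sent/received on the private (cluster) network interface, sampled
# before and after a run and normalized by the client request volume.
def iface_bytes(iface: str):
    base = f"/sys/class/net/{iface}/statistics"
    with open(f"{base}/rx_bytes") as rx, open(f"{base}/tx_bytes") as tx:
        return int(rx.read()), int(tx.read())

iface = "eth1"                         # placeholder: private-network NIC
rx0, tx0 = iface_bytes(iface)
# ... run the workload here (placeholder) ...
rx1, tx1 = iface_bytes(iface)

request_mb = 1024                      # placeholder: MB issued by clients
traffic_mb = ((rx1 - rx0) + (tx1 - tx0)) / 2**20
print(f"private-network traffic ~{traffic_mb / request_mb:.1f}x the request volume")
```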

29 Conclusion
We studied the overheads imposed by erasure coding on a distributed SSD array system. In contrast to the common expectation about erasure codes, we observed that they exhibit heavy network traffic and more I/O amplification than replication. Erasure coding also requires many more CPU cycles and context switches than replication due to its user-level implementation.

30 Q&A

31 Object Management in Erasure Coding
Time-series analysis of CPU utilization, context switches, and private-network throughput, observed for random writes on a pristine image (object initialization) and for random overwrites (object update).
[Figures: time series for the object-initialization and object-update phases.]

