High Availability in Clustered Multimedia Servers
Renu Tewari, Daniel M. Dias, Rajat Mukherjee, Harrick M. Vin

Topics
– Problem: high availability in clustered multimedia servers
– Schemes for high availability
– Details of some schemes for high availability
– Simulation
– Cost-performance analysis
– Conclusion

The problem the authors address
High availability in clustered multimedia servers. A clustered multimedia server consists of a set of processing nodes, each with a local disk array, connected by a high-bandwidth switch or network. High availability requires that the server provide continuous delivery in the presence of failures, including disk failures and node failures.

Architecture of clustered multimedia servers
The front end performs delivery; the back end provides storage.

Architecture of clustered multimedia servers (failure modes)
– Front-end failure: the stream can be resumed from another delivery node.
– Back-end failure: has a system-wide effect, and is the major concern of this paper.

Schemes for high availability
Mirroring
– Disk-level mirroring
– Block-level mirroring
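A minimal sketch (not the paper's code) of how block-level mirroring might place a primary block and its mirror on disks attached to different nodes, so that a single node or disk failure never removes both copies. The placement rule and helper names are assumptions for illustration only:

```python
# Hypothetical sketch of block-level mirroring placement.
# Disk i is attached to node i mod N (as in the sequential-placement slide);
# the mirror copy is placed on the disk with the same row offset on the next node.

def mirror_placement(block_id, num_nodes, disks_per_node):
    num_disks = num_nodes * disks_per_node
    primary = block_id % num_disks
    primary_node = primary % num_nodes
    mirror_node = (primary_node + 1) % num_nodes      # always a different node
    mirror = mirror_node + (primary // num_nodes) * num_nodes
    return primary, mirror

if __name__ == "__main__":
    num_nodes = 4
    for b in range(8):
        p, m = mirror_placement(b, num_nodes=num_nodes, disks_per_node=2)
        print(f"block {b}: primary disk {p} (node {p % num_nodes}), "
              f"mirror disk {m} (node {m % num_nodes})")
```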

Schemes for high availability
Twin-tailing / multi-tailing
Used to handle node failures: a buddy node can access the failed node's disks.

Schemes for high availability
Software RAID
– Sequential parity placement
– Random parity placement

Sequential parity placement
Disk i is attached to node i mod N, where N is the number of nodes.
With a parity group of size 5, blocks are laid out as DDDDP DDDDP …
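A small sketch of sequential parity placement (SPP) under the assumptions stated on the slide: blocks are striped round-robin across all disks, disk i is attached to node i mod N, and every G-th block in the stripe is a parity block (D D D D P for G = 5). The function name is illustrative, not from the paper:

```python
# Sequential parity placement (SPP) sketch.

def spp_layout(num_blocks, num_disks, num_nodes, group_size=5):
    layout = []
    for i in range(num_blocks):
        disk = i % num_disks                       # round-robin striping
        node = disk % num_nodes                    # disk i attached to node i mod N
        kind = "P" if (i % group_size) == group_size - 1 else "D"
        layout.append((i, disk, node, kind))
    return layout

if __name__ == "__main__":
    for blk, disk, node, kind in spp_layout(10, num_disks=8, num_nodes=4):
        print(f"block {blk}: disk {disk}, node {node}, {kind}")
```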

Sequential parity placement

Random parity placement
Constraint: blocks belonging to the same parity group are not placed on the same disk or on any disk of the same node.
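A hedged sketch of random parity placement (RPP) that enforces the stated constraint: the blocks of one parity group land on distinct disks attached to distinct nodes. The selection strategy and helper names are assumptions for illustration:

```python
import random

# Random parity placement (RPP) sketch: pick group_size disks at random,
# rejecting any disk whose node is already used by this parity group.

def place_parity_group(group_size, num_disks, num_nodes, rng=random):
    chosen_disks, used_nodes = [], set()
    candidates = list(range(num_disks))
    rng.shuffle(candidates)
    for disk in candidates:
        node = disk % num_nodes            # disk i is attached to node i mod N
        if node not in used_nodes:
            chosen_disks.append(disk)
            used_nodes.add(node)
            if len(chosen_disks) == group_size:
                return chosen_disks
    raise ValueError("not enough distinct nodes to satisfy the placement constraint")

if __name__ == "__main__":
    random.seed(0)
    print(place_parity_group(group_size=4, num_disks=12, num_nodes=6))
```

Because the group-to-disk mapping is random, it must be recorded explicitly, which is why RPP carries a larger metadata cost than SPP, as the comparison slide below notes.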

Random parity placement

Some simulation results

Comparison between SPP and RPP
According to the simulation, RPP performs better than SPP, but RPP needs much more metadata. With RPP, increasing the read-ahead buffer size reduces the loss substantially. To reduce the metadata overhead of RPP, SPP with multiple strides can be used instead.

Cost analytical model
Queueing theory: each disk behaves like an M/D/1 queue.
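For reference, the standard M/D/1 result (a textbook queueing formula, not reproduced from the paper's specific cost model): with block-request arrival rate $\lambda$ at a disk and constant service time $1/\mu$, the mean queueing delay and response time are

$$
W_q = \frac{\rho}{2\mu(1-\rho)}, \qquad \rho = \frac{\lambda}{\mu}, \qquad T = W_q + \frac{1}{\mu}.
$$

As $\rho \to 1$ (e.g., when surviving disks absorb the load of a failed disk), the delay grows sharply, which is what makes balanced parity placement during failure important.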

Conclusions
– Sequential placement of parity balances the space and bandwidth utilization of all disks only during normal operation.
– Balanced random placement of parity achieves this balance during both failure and normal operation.
– Mirroring is cost-effective only when missing real-time constraints during failure is more expensive than the extra disk capacity.
– Larger parity group sizes have smaller disk-space overhead but larger memory costs for buffering.
– Tighter loss criteria require smaller parity group sizes.

Questions
– Name three schemes for high availability.
– In sequential parity placement, why should the parity group size be chosen relatively prime to the number of disks?
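A small illustration (not from the paper) of the intuition behind the second question: with sequential placement, block i goes to disk i mod D and every G-th block is parity. If G divides D, parity always lands on the same few disks and unbalances them; if G is relatively prime to D, the parity position rotates over all disks.

```python
from collections import Counter

# Count how many parity blocks land on each disk under sequential placement.
def parity_disks(num_blocks, num_disks, group_size):
    return Counter(i % num_disks
                   for i in range(num_blocks)
                   if i % group_size == group_size - 1)

if __name__ == "__main__":
    print("G=4, 8 disks:", dict(parity_disks(80, 8, 4)))   # parity only on disks 3 and 7
    print("G=5, 8 disks:", dict(parity_disks(80, 8, 5)))   # parity spread over all 8 disks
```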