© 2006 EMC Corporation. All rights reserved. Business Continuity: Remote Replication Module 4.4.


Remote Replication

After completing this module, you will be able to:
 Explain remote replication concepts
– Synchronous/Asynchronous
– Connectivity options
 Discuss host and array based remote replication technologies
– Functionality
– Differences
– Considerations
– Selecting the appropriate technology

Remote Replication Concepts

 Replica is available at a remote facility
– Could be a few miles away or halfway around the world
– Backup and vaulting are not considered remote replication
 Synchronous replication
– Replica is identical to the source at all times
– Zero RPO
 Asynchronous replication
– Replica is behind the source by a finite margin
– Small RPO
 Connectivity
– Network infrastructure over which data is transported from the source site to the remote site

Synchronous Replication

 A write has to be secured on the remote replica and the source before it is acknowledged to the host
 Ensures that the source and remote replica have identical data at all times
– Write ordering is maintained at all times
 Replica receives writes in exactly the same order as the source
 Synchronous replication provides the lowest RPO and RTO
– Goal is zero RPO
– RTO is as small as the time it takes to start the application at the remote site

[Figure: data write and acknowledgement flow between server and disk]
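The synchronous write sequence can be sketched in Python. This is a minimal toy model for illustration only; the `Replica` and `SynchronousSource` names are invented, not any vendor's API:

```python
class Replica:
    """Toy remote replica: stores blocks and remembers arrival order."""
    def __init__(self):
        self.blocks = {}
        self.write_log = []   # arrival order, to verify write ordering

    def write(self, lba, data):
        self.blocks[lba] = data
        self.write_log.append(lba)
        return True           # acknowledgement sent back to the source


class SynchronousSource:
    """Host write is acknowledged only after the replica has secured it."""
    def __init__(self, replica):
        self.blocks = {}
        self.replica = replica

    def host_write(self, lba, data):
        self.blocks[lba] = data               # secure the write locally
        ack = self.replica.write(lba, data)   # round trip to the remote site
        if not ack:
            raise IOError("remote write not acknowledged")
        return "ack"                          # only now acknowledge the host


replica = Replica()
source = SynchronousSource(replica)
source.host_write(0, b"A")
source.host_write(1, b"B")
assert source.blocks == replica.blocks   # zero RPO at every acknowledged write
```

The round trip inside `host_write` is exactly where the response-time extension discussed on the next slide comes from.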

Synchronous Replication

 Response time extension
– Application response time is extended by synchronous replication
 Data must be transmitted to the remote site before the write can be acknowledged
 Time to transmit depends on distance and bandwidth
 Bandwidth
– To minimize the impact on response time, sufficient bandwidth must be provided at all times
 Rarely deployed beyond 200 km

[Figure: write workload (MB/s) over time, showing peak (max) and average rates]

Asynchronous Replication

 Write is acknowledged to the host as soon as it is received by the source
 Data is buffered and sent to the remote site
– Some vendors maintain write ordering
– Others do not maintain write ordering, but ensure that the replica is always a consistent, restartable image
 Finite RPO
– Replica will be behind the source by a finite amount
– Typically configurable

[Figure: data write and acknowledgement flow between server and disk]
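By contrast, the asynchronous path acknowledges the host before the data reaches the remote site. A minimal sketch with invented names, for illustration only:

```python
from collections import deque

class Replica:
    """Toy remote replica."""
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data


class AsynchronousSource:
    """Host write is acknowledged immediately; data is buffered and
    drained to the replica later, so the replica lags by the buffer."""
    def __init__(self, replica):
        self.blocks = {}
        self.replica = replica
        self.buffer = deque()          # FIFO preserves write ordering

    def host_write(self, lba, data):
        self.blocks[lba] = data
        self.buffer.append((lba, data))
        return "ack"                   # no round trip to the remote site

    def drain(self):
        """Background transfer; in a real array this runs continuously."""
        while self.buffer:
            self.replica.write(*self.buffer.popleft())


source = AsynchronousSource(Replica())
source.host_write(0, b"A")
assert source.replica.blocks == {}     # replica lags: this is the RPO exposure
source.drain()
assert source.replica.blocks == source.blocks
```

The window between `host_write` returning and `drain` completing is the finite RPO the slide describes.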

Asynchronous Replication

 Response time is unaffected
 Bandwidth
– Sufficient bandwidth is needed on average, rather than for the peak write rate
 Buffers
– Sufficient buffer capacity is needed to absorb write bursts
 Can be deployed over long distances

[Figure: write workload (MB/s) over time, showing peak (max) and average rates]

Remote Replication Technologies

 Host based
– Logical Volume Manager (LVM)
 Synchronous/Asynchronous
– Log shipping
 Storage array based
– Synchronous
– Asynchronous
– Disk buffered – consistent PITs
 Combination of local and remote replication

LVM Based Remote Replication

 Duplicate volume groups at the local and remote sites
 All writes to the source volume group are replicated to the remote volume group by the LVM
– Synchronous or asynchronous

[Figure: local-site volume group (physical volumes 1–3) replicated over the network, via a log, to a remote-site volume group]

LVM Based Remote Replication

 In the event of a network failure
– Writes are queued in the log file
– When the issue is resolved, the queued writes are sent to the remote site
– The maximum size of the log file determines the length of outage that can be withstood
 In the event of a failure at the source site, production operations can be transferred to the remote site
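The outage handling above can be modeled in a few lines. This is a sketch with hypothetical names, not a real LVM interface:

```python
class LvmLogReplicator:
    """Toy model of LVM remote replication with a bounded write log.
    While the network is down, writes queue in the log; a full log
    means the outage exceeded what the configuration can withstand."""
    def __init__(self, log_capacity):
        self.local, self.remote = {}, {}
        self.log, self.log_capacity = [], log_capacity
        self.network_up = True

    def write(self, lv, data):
        self.local[lv] = data                  # local write always succeeds
        if self.network_up:
            self.remote[lv] = data
        elif len(self.log) < self.log_capacity:
            self.log.append((lv, data))        # queue for later replay
        else:
            raise RuntimeError("log full: full resynchronization required")

    def network_restored(self):
        self.network_up = True
        for lv, data in self.log:              # replay queued writes in order
            self.remote[lv] = data
        self.log.clear()


r = LvmLogReplicator(log_capacity=2)
r.write("lv0", b"x")
r.network_up = False                           # simulate a network failure
r.write("lv0", b"y")                           # queued, not replicated
r.network_restored()
assert r.remote == r.local == {"lv0": b"y"}
```

Sizing `log_capacity` against the expected write rate is the design decision behind "the maximum size of the log file determines the length of outage that can be withstood."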

LVM Based Remote Replication

 Advantages
– Different storage arrays and RAID protection can be used at the source and remote sites
– A standard IP network can be used for replication
– The response time impact can be eliminated with asynchronous mode, at the cost of an extended RPO
 Disadvantages
– Extended network outages require large log files
– CPU overhead on the host for maintaining and shipping log files

Host Based Log Shipping

 Offered by most database vendors
 Advantages
– Minimal CPU overhead
– Low bandwidth requirement
– Standby database is consistent up to the last applied log

[Figure: original logs shipped over an IP network to standby logs]

Array Based Remote Replication

 Replication is performed by the array operating environment
– Host CPU resources can be devoted to production operations instead of replication operations
– Arrays communicate with each other via dedicated channels
 ESCON, Fibre Channel or Gigabit Ethernet
 Replicas are on different arrays
– Primarily used for DR purposes
– Can also be used for other BC operations

[Figure: production server writing to a source on the production array, replicated over the network to a replica on a remote array with a DR server]

Array Based – Synchronous Replication

1. Write is received by the source array from the host/server
2. Write is transmitted by the source array to the remote array
3. Remote array sends an acknowledgement to the source array
4. Source array signals write complete to the host/server

[Figure: source and target arrays connected by network links]

Array Based – Asynchronous Replication

 No impact on response time
 Extended distances between arrays
 Lower bandwidth requirement compared to synchronous

1. Write is received by the source array from the host/server
2. Source array signals write complete to the host/server
3. Write is transmitted by the source array to the remote array
4. Remote array sends an acknowledgement to the source array

[Figure: source and target arrays connected by network links]

Array Based – Asynchronous Replication

 Ensuring consistency
– Maintain write ordering
 Some vendors attach a time stamp and sequence number to each write, ship the writes to the remote array, and apply them to the remote devices in the exact order of the time stamps and sequence numbers
 The remote array applies the writes in exactly the order they were issued, just as in synchronous replication
– Dependent write consistency
 Some vendors buffer the writes in the cache of the source array for a period of time (between 5 and 30 seconds)
 At the end of this period the current buffer is closed in a consistent manner and the buffers are switched; new writes are received into the new buffer
 The closed buffer is then transmitted to the remote array
 The remote replica will contain a consistent, restartable image of the application
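The first approach, sequence-number ordering, can be sketched as follows. This is a toy model of the idea, not any vendor's implementation; the class and field names are assumptions:

```python
import heapq

class OrderedReplica:
    """Writes may arrive out of order (e.g. over multiple links), but
    are applied strictly in sequence-number order, so dependent-write
    consistency is preserved on the remote devices."""
    def __init__(self):
        self.blocks = {}
        self.applied = []          # sequence numbers, in apply order
        self._pending = []         # min-heap of (seq, lba, data)
        self._next = 0

    def receive(self, seq, lba, data):
        heapq.heappush(self._pending, (seq, lba, data))
        # apply every write whose predecessors have all arrived
        while self._pending and self._pending[0][0] == self._next:
            s, l, d = heapq.heappop(self._pending)
            self.blocks[l] = d
            self.applied.append(s)
            self._next += 1


r = OrderedReplica()
for seq, lba, data in [(2, 7, b"c"), (0, 5, b"a"), (1, 5, b"b")]:
    r.receive(seq, lba, data)      # writes arrive out of order
assert r.applied == [0, 1, 2]      # applied in issue order, not arrival order
assert r.blocks[5] == b"b"         # the later write to LBA 5 wins
```

Note how write 2 is held in the pending heap until writes 0 and 1 have arrived; a dependent write is never applied before the write it depends on.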

Array Based – Disk Buffered Consistent PITs

 Local and remote replication technologies can be combined to create consistent PIT copies of data on remote arrays
 RPO is usually on the order of hours
 Lower bandwidth requirements
 Extended distance solution

Extended Distance Consistent PIT

 Create a consistent PIT local replica on the source array
 Create a remote replica of this local replica
 Optionally create another replica of the remote replica on the remote array if needed
 Repeat, as automation, link bandwidth and change rate permit

[Figure: source array (source, local replica) connected over network links to a remote array (remote replica)]

Synchronous + Extended Distance Consistent PIT

 Synchronous replication between the source and bunker sites
 Create a consistent PIT local replica at the bunker
 Create a remote replica of the bunker local replica
 Optionally create an additional local replica at the target site from the remote replica if needed
 Repeat, as automation, link bandwidth and change rate permit

[Figure: source site synchronously mirrored to a bunker site (remote replica, local replica), then shipped over network links to a remote site]

Remote Replicas – Tracking Changes

 Remote replicas can be used for BC operations
– Typically, remote replication operations are suspended while the remote replicas are used for BC operations
 During BC operations, changes will (or could) happen to both the source and the remote replicas
– Most remote replication technologies can track changes made to the source and remote replicas to allow incremental re-synchronization
– Resuming remote replication operations requires re-synchronization between the source and replica
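The incremental re-synchronization idea can be sketched with a change map per side. Names here are illustrative only, not a vendor API:

```python
class TrackedDevice:
    """Toy model of change tracking: while replication is suspended,
    each side marks the blocks it has modified, so resumption only
    needs to copy the union of the two change maps."""
    def __init__(self, nblocks):
        self.blocks = [b""] * nblocks
        self.changed = set()           # stands in for a change bitmap

    def write(self, lba, data):
        self.blocks[lba] = data
        self.changed.add(lba)


def incremental_resync(source, replica):
    """Copy only the blocks dirty on either side; source data wins,
    so changes made to the replica during BC operations are discarded."""
    dirty = source.changed | replica.changed
    for lba in dirty:
        replica.blocks[lba] = source.blocks[lba]
    source.changed.clear()
    replica.changed.clear()
    return len(dirty)                  # blocks shipped, not the whole device


src, rep = TrackedDevice(8), TrackedDevice(8)
src.write(1, b"s")                     # production change at the source
rep.write(3, b"r")                     # BC-operation change at the replica
assert incremental_resync(src, rep) == 2   # only 2 of 8 blocks copied
assert rep.blocks == src.blocks
```

Without the change maps, every resumption would require a full copy of the device; with them, only two of the eight blocks move.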

Primary Site Failure – Operations at Remote Site

 Remote replicas are typically not available for use while the replication session is in progress
 In the event of a primary site failure, the replicas have to be made accessible for use
 Create a local replica of the remote devices at the remote site
 Start operations at the remote site
– No remote protection while primary site issues are resolved
 After issue resolution at the primary site
– Stop activities at the remote site
– Restore the latest data from the remote devices to the source
– Resume operations at the primary (source) site

Array Based – Which Technology?

 Synchronous
– A must if zero RPO is required
– Needs sufficient bandwidth at all times
– Application response time elongation rules out extended distance solutions (rarely above 125 miles / 200 km)
 Asynchronous
– Extended distance solutions with minimal RPO (on the order of minutes)
– No response time elongation
– Generally requires lower bandwidth than synchronous
– Must be designed with adequate cache/buffer or sidefile/logfile capacity
 Disk buffered consistent PITs
– Extended distance solution with RPO on the order of hours
– Generally lower bandwidth than synchronous or asynchronous
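The selection guidance above can be condensed into a toy decision helper. The thresholds mirror the slide's rules of thumb (zero RPO only within roughly 200 km, minutes-level RPO for asynchronous, hours-level for disk buffered PITs) and are not hard limits:

```python
def choose_replication(rpo_seconds, distance_km):
    """Toy decision helper encoding the slide's rules of thumb."""
    if rpo_seconds == 0:
        if distance_km > 200:          # response-time elongation dominates
            raise ValueError("zero RPO is impractical at this distance")
        return "synchronous"
    if rpo_seconds <= 3600:            # minutes-level RPO
        return "asynchronous"
    return "disk-buffered consistent PIT"  # hours-level RPO


assert choose_replication(0, 50) == "synchronous"
assert choose_replication(300, 2000) == "asynchronous"
assert choose_replication(8 * 3600, 5000) == "disk-buffered consistent PIT"
```

A real design would also weigh bandwidth cost and buffer sizing, which this helper deliberately ignores.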

Storage Array Based Remote Replication

 Network options
– Most vendors support ESCON or Fibre Channel adapters for remote replication
 Can connect to optical or IP networks with appropriate protocol converters for extended distances
 DWDM
 SONET
 IP networks
– Some vendors have native Gigabit Ethernet adapters, which allow the array to be connected directly to IP networks without protocol converters

Dense Wavelength Division Multiplexing (DWDM)

 DWDM is a technology that puts data from different sources together on an optical fiber, with each signal carried on its own separate light wavelength (commonly referred to as a lambda, λ)
 Up to 32 protected and 64 unprotected separate wavelengths of data can be multiplexed into a light stream transmitted on a single optical fiber

[Figure: ESCON, Fibre Channel and Gigabit Ethernet channels multiplexed onto optical lambdas (optical–electrical–optical conversion)]

Synchronous Optical Network (SONET)

 SONET is a Time Division Multiplexing (TDM) technology in which traffic from multiple subscribers is multiplexed together and sent onto the SONET ring as an optical signal
 Synchronous Digital Hierarchy (SDH) is similar to SONET but is the European standard
 SONET/SDH offers the ability to serve multiple locations, high reliability/availability, automatic protection switching, and restoration

[Figure: SDH ring with STM-1 and STM-16 links]

Rated Bandwidth

Link               Bandwidth (Mb/s)
ESCON              200
Fibre Channel      1024 or 2048
Gigabit Ethernet   1024
T1                 1.5
T3                 45
E1                 2
E3                 34
OC1                52
OC3/STM-1          155
OC12/STM-4         622
OC48/STM-16        2488
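These rated bandwidths make it easy to estimate how long a resynchronization or bulk transfer would take. A rough back-of-the-envelope calculation; the 70% usable-throughput factor is an assumption standing in for protocol overhead and contention:

```python
def hours_to_ship(data_gb, link_mbps, efficiency=0.7):
    """Rough transfer time for data_gb gigabytes over a link rated
    link_mbps megabits/s, at the assumed usable efficiency."""
    bits = data_gb * 8 * 1000**3           # decimal GB -> bits
    usable_bps = link_mbps * 1e6 * efficiency
    return bits / usable_bps / 3600


# Shipping 100 GB of changed data:
t3 = hours_to_ship(100, 45)        # T3: roughly 7 hours
gige = hours_to_ship(100, 1024)    # Gigabit Ethernet: well under an hour
assert t3 > 20 * gige              # link choice dominates achievable RPO
```

Calculations like this are why asynchronous and disk buffered solutions emphasize sizing bandwidth against the average change rate rather than the peak.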

Module Summary

Key points covered in this module:
 Remote replication concepts
– Synchronous/Asynchronous
– Connectivity options
 Host and array based remote replication technologies
– Functionality
– Differences
– Considerations
– Selecting the appropriate technology

Check Your Knowledge

 What is a remote replica?
 What are the possible uses of remote replicas?
 What is the difference between synchronous and asynchronous replication?
 Discuss one host based remote replication technology.
 Discuss one array based remote replication technology.
 What are the differences in bandwidth requirements between the array remote replication technologies discussed in this module?

Apply Your Knowledge

Upon completion of this topic, you will be able to:
 Enumerate EMC’s remote replication solutions for the Symmetrix and CLARiiON arrays
 Describe EMC’s SRDF/Synchronous replication solution
 Describe EMC’s MirrorView/A replication solution

EMC – Remote Replication Solutions

 EMC Symmetrix arrays
– EMC SRDF/Synchronous
– EMC SRDF/Asynchronous
– EMC SRDF/Automated Replication
 EMC CLARiiON arrays
– EMC MirrorView/Synchronous
– EMC MirrorView/Asynchronous

EMC SRDF/Synchronous – Introduction

 Array based synchronous remote replication technology for EMC Symmetrix storage arrays
– Facility for maintaining real-time, physically separate mirrors of selected volumes
 SRDF/Synchronous uses special Symmetrix devices
– Source arrays have SRDF R1 devices
– Target arrays have SRDF R2 devices
– Data written to R1 devices is replicated to R2 devices
 SRDF uses dedicated channels to send data from the source to the target array
– ESCON, Fibre Channel and Gigabit Ethernet are supported
 SRDF is available in both open systems and mainframe environments

SRDF Source and Target Volumes

 SRDF R1 and R2 volumes can have any local RAID protection
– E.g. volumes could have RAID-1 or RAID-5 protection
 SRDF R2 volumes are in a read-only state while remote replication is in effect
– Changes cannot be made to the R2 volumes
 SRDF R2 volumes are accessed under certain circumstances
– Failover – invoked when the primary volumes become unavailable
– Split – invoked when the R2 volumes need to be concurrently accessed for BC operations

SRDF/Synchronous

1. Write is received by the Symmetrix containing the source (R1) volume
2. Source Symmetrix sends the write data to the target Symmetrix
3. Target Symmetrix sends an acknowledgement to the source
4. Write complete is sent to the host

 The application does not receive an I/O acknowledgement until the data is received and acknowledged by the remote Symmetrix
 Write completion time is extended; there is no impact on reads
 Most often used in campus solutions

[Figure: source host and Symmetrix containing source (R1) volumes (Channel Director, Global Cache Director, Disk Directors, Remote Link Director) linked to a Symmetrix containing target (R2) volumes and a target host]

SRDF Operations – Failover

 Purpose
– Make target volumes read/write
 Source volume status is changed to read-only
 The SRDF link is suspended

[Figure: before – source volume RW, target volume RO; after – source volume RO, target volume RW]

SRDF Operations – Failback

 Makes the target volume read-only, resumes the link, synchronizes R2 to R1, and write-enables the source volume

[Figure: before – source volume RO, target volume RW; after – source volume RW, target volume RO, with R2-to-R1 synchronization]

SRDF Operations – Split

 Enables read and write operations on both source and target volumes
 Suspends replication

[Figure: before – source volume RW, target volume RO; after – both source and target volumes RW]

SRDF Operations – Establish/Restore

 Establish – resumes SRDF operation retaining the data on the source and overwriting any changed data on the target
 Restore – resumes SRDF operation retaining the data on the target and overwriting any changed data on the source

[Figure: Establish – source volume RW replicating to target volume RO; Restore – target data copied back to the source]

EMC CLARiiON MirrorView/A Overview

 Optional storage system software for remote replication on EMC CLARiiON arrays
– No host cycles are used for data replication
 Provides a remote image for disaster recovery
– Remote image is updated periodically, asynchronously
– Remote image cannot be accessed by hosts while replication is active
– A snapshot of the mirrored data can be host-accessible at the remote site
 Mirror topology (connecting the primary array to secondary arrays)
– Direct connect and switched FC topologies are supported
– WAN connectivity is supported using specialized hardware

MirrorView/A Terms

 Primary storage system
– Holds the local image for a given mirror
 Secondary storage system
– Holds the remote image for a given mirror
 Bidirectional mirroring
– A storage system can hold both local and remote images
 Mirror synchronization
– The process that copies data from the local image to the remote image
 MirrorView fractured state
– The condition when a secondary storage system is unreachable by the primary storage system

MirrorView/A Configuration

 MirrorView/A setup
– MirrorView/A software must be loaded on both the primary and secondary storage systems
– The remote LUN must be exactly the same size as the local LUN
– The secondary LUN does not need to be the same RAID type as the primary
– Reserved LUN Pool space must be configured
 Management via Navisphere Manager and CLI

MirrorView/A – Initial Synchronization

[Figure: host writing to the primary image; tracking and transfer delta maps drive the copy to the secondary image, which is protected by a snapshot in the Reserved LUN Pool (RLP)]

MirrorView/A – Update

[Figure: host writes during an update are recorded in a new tracking delta map while the transfer delta map ships the previous set of changes to the secondary image; snapshot and Reserved LUN Pool (RLP) shown at the secondary]

MirrorView/A – Promotion (Update Failure)

[Figure: after an update failure, the secondary image is promoted to become the primary image; snapshot and Reserved LUN Pool (RLP) shown at the secondary]

Consistency Groups

 A group of secondary images treated as a unit
 Local LUNs must all be on the same CLARiiON
 Remote LUNs must all be on the same CLARiiON
 Operations happen on all LUNs at the same time
– Ensures a restartable image group