Module – 11 Local Replication


Module 11: Local Replication
Copyright © 2012 EMC Corporation. All Rights Reserved.

Module 11: Local Replication
Upon completion of this module, you should be able to:
Describe various uses of local replicas
Describe how consistency is ensured in file system and database replication
Describe host-based, array-based, and network-based local replication technologies
Explain restore and restart considerations
Describe local replication in a virtualized environment

Lesson 1: Local Replication Overview
During this lesson the following topics are covered:
Uses of local replicas
File system and database consistency

Overview
It is vital for an organization to protect mission-critical data and minimize the risk of business disruption.
If a local outage or disaster occurs, fast data restore and restart are essential to ensure business continuity (BC).
Replication is one of the ways to ensure BC.

What is Replication?
Replication is the process of creating an exact copy (replica) of data.
Replication can be classified as:
Local replication – replicating data within the same array or data center
Remote replication – replicating data to a remote site
(Diagram: Source → Replication → Replica (Target))

Uses of Local Replicas
Alternate source for backup: the production device is burdened when it is simultaneously involved in production operations and servicing data for backup; a local replica contains an exact point-in-time (PIT) copy of the source data and can be used for backup instead.
Fast recovery: if data on the source is lost or corrupted, the local replica can be used to recover the lost or corrupted data.
Decision support activities: reporting against the replica reduces the I/O burden on the production device.
Testing platform: an upgrade can be tested on the replica; if the test is successful, the upgrade can be implemented on the production environment.
Data migration: for example, from a smaller-capacity device to a larger-capacity device.

Replica Characteristics
Recoverability/restartability: the replica should be able to restore data on the source device and support restarting business operations from the replica.
Consistency: the replica must be consistent with the source so that it is usable for both recovery and restart.
Types of replica (the choice ties back to the Recovery Point Objective, RPO):
Point-in-Time (PIT) – data on the replica is an identical image of production at a specific timestamp (for example, a Monday 4:00 p.m. PIT copy); to minimize RPO, take periodic PITs.
Continuous – data on the replica is in sync with the production data at all times, reducing RPO to zero or near zero.

Understanding Consistency
Consistency ensures the usability of the replica.
Consistency can be achieved in various ways for a file system or database before creating the replica:
File system – Offline: un-mount the file system; Online: flush host buffers
Database – Offline: shut down the database; Online: use the dependent write I/O principle, or hold I/Os to the source before creating the replica

File System Consistency: Flushing Host Buffer
File systems buffer data in host memory to improve application response time; the buffered data is periodically written to disk.
If a replica is created between these flush intervals, it might be inconsistent.
Therefore, host memory buffers must be flushed prior to creating the replica to ensure data consistency on it.
(Diagram: I/O stack – Application, File System, Memory Buffers, Logical Volume Manager, Physical Disk Driver – with buffers flushed to disk before data flows from Source to Replica)
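
The ordering above (flush first, then create the point-in-time copy) can be sketched in a few lines. This is only an illustration: take_snapshot is a hypothetical placeholder for whatever array- or volume-level snapshot mechanism is actually in use, and on a real host the flush is typically done with the platform's sync/fsync facilities.

```python
import os

def snapshot_with_flushed_buffers(file_path, take_snapshot):
    """Flush buffered data to disk before creating a point-in-time replica.

    take_snapshot is a hypothetical callable standing in for the actual
    snapshot mechanism (array-based, LVM, file-system, and so on).
    """
    # Flush this file's dirty pages to stable storage ...
    fd = os.open(file_path, os.O_RDONLY)
    try:
        os.fsync(fd)
    finally:
        os.close(fd)

    # ... then ask the OS to flush any remaining dirty buffers system-wide
    os.sync()

    # Only now create the replica, so it captures a consistent image
    return take_snapshot()
```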

Database Consistency: Dependent Write I/O Principle
A consistent replica of an online database is created by using the dependent write I/O principle or by temporarily holding I/Os to the source before creating the replica.
Dependent write: a write I/O is not issued by an application until a prior, related write I/O has completed.
This is a logical dependency, not a time dependency, and it is inherent in all database management systems (DBMSs); for example, a data write is dependent on the successful completion of the prior log write.

Database Consistency: Dependent Write I/O Principle (contd.)
The principle is necessary for protection against local outages: a power failure creates a dependent write consistent image.
Restart transforms the dependent write consistent image into a transactionally consistent one, i.e. committed transactions are recovered and in-flight transactions are discarded.

Database Consistency: Dependent Write I/O Principle (contd.)
Inconsistent replica: I/O transactions 3 and 4 were copied to the replica devices, but transactions 1 and 2 were not. If a restart were performed on the replica devices, I/O 4, which is available on the replica, might indicate that a particular transaction is complete, yet the data associated with that transaction would be unavailable on the replica, making the replica inconsistent.
Consistent replica: I/O transactions 1 through 4 must all be carried to the replica for the data on it to be consistent.
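
The example above boils down to a simple property: the writes that reach the replica must form an unbroken prefix of the dependent-write sequence, so that no write is present without the earlier writes it depends on. A minimal sketch of that check (the sequence numbers are illustrative, matching the 1–4 example on the slide):

```python
def replica_is_write_consistent(copied_ids):
    """Return True if the copied writes form a prefix of the dependent-write
    sequence 1..N, i.e. no later write is present without its predecessors."""
    latest = max(copied_ids, default=0)
    return all(i in copied_ids for i in range(1, latest + 1))

print(replica_is_write_consistent({3, 4}))          # False - writes 1 and 2 are missing
print(replica_is_write_consistent({1, 2, 3, 4}))    # True  - the full prefix was copied
```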

Database Consistency: Holding I/O
Another way to create a consistent image on the replica is to hold the write I/Os to all source devices for the duration of the replica creation.

Lesson 2: Local Replication Technologies
During this lesson the following topics are covered:
Local replication technologies
Restore and restart considerations

Local Replication Technologies
Host based:
Logical Volume Manager (LVM) based replication (LVM mirroring)
File system snapshot
Storage array based:
Full-volume mirroring
Pointer-based full-volume replication
Pointer-based virtual replication

Host-based Replication: LVM-based Mirroring
In LVM mirroring, an application write to a logical partition is written to two physical partitions by the LVM device driver.
Advantages:
LVM-based replication is not dependent on a vendor-specific storage system.
LVM is part of the operating system, and no additional license is required to deploy LVM mirroring.
(Diagram: a logical volume mapped to Physical Volume 1 and Physical Volume 2)

LVM-based Replication: Limitations
LVM-based replicas add overhead on the host CPU: each write is translated into two writes on disk, which can degrade application performance.
If the host volumes are already storage array LUNs, the added redundancy provided by LVM mirroring is unnecessary, because the devices already have some RAID protection.
Both the replica and the source are stored within the same volume group, so the replica cannot be accessed by another host; if the server fails, both source and replica become unavailable.
Keeping track of changes on the mirrors is a challenge.

Host-based Replication: File System Snapshot
A file system (FS) snapshot is a pointer-based replica that requires only a fraction of the space used by the production FS.
It uses the Copy on First Write (CoFW) principle to create the snapshot.
When the snapshot is created, a bitmap and a block map are created in the metadata of the snap FS.
(Diagram: production FS blocks holding data a, b, C, D, ..., N alongside the snap FS metadata with bitmap/block map entries and the relocated original data c and d)

Host-based Replication: File System Snapshot (contd.)
Bitmap: keeps track of changes made to the production FS after snapshot creation.
Block map: holds the exact address from which the data is to be read.
CoFW mechanism: if a write I/O is issued to the production FS for the first time after the creation of a snapshot, the I/O is held and the original production FS data at that location is moved to the snap FS. The write to the production FS is then allowed, and the bitmap and block map are updated accordingly.

File System Snapshots – How it Works
Reads from the snap FS consult the bitmap:
If the bit is 0, the read is directed to the production FS.
If the bit is 1, the block address is obtained from the block map and the data is read from that address in the snap FS.
(Diagram: snap FS metadata with bitmap/block map entries and the corresponding production FS blocks)
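
Putting the last three slides together, the bitmap/block map bookkeeping can be sketched as a small in-memory model. This is illustrative only; real snapshots operate at the block-device or file-system layer, and all names below are made up for the example:

```python
class CoFWSnapshot:
    """In-memory sketch of a Copy-on-First-Write file-system snapshot."""

    def __init__(self, production):
        self.production = production          # live FS blocks
        self.bitmap = [0] * len(production)   # 0 = unchanged since snap creation
        self.blockmap = {}                    # production block -> snap-store index
        self.snap_store = []                  # relocated original blocks

    def write(self, block, data):
        """Write to the production FS, preserving the original on first write."""
        if self.bitmap[block] == 0:
            self.blockmap[block] = len(self.snap_store)
            self.snap_store.append(self.production[block])   # move original to snap FS
            self.bitmap[block] = 1
        self.production[block] = data                        # then allow the new write

    def read_snapshot(self, block):
        """Read the point-in-time image: consult the bitmap, as on the slide."""
        if self.bitmap[block] == 0:
            return self.production[block]                    # unchanged: read production FS
        return self.snap_store[self.blockmap[block]]         # changed: read saved copy

fs = CoFWSnapshot(["a", "b", "c", "d"])
fs.write(2, "C")                              # CoFW: original "c" is saved first
print(fs.read_snapshot(2), fs.production[2])  # -> c C
```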

Storage Array-based Local Replication
Replication is performed by the array operating environment; the source and the replica are on the same array.
Types of array-based replication:
Full-volume mirroring
Pointer-based full-volume replication
Pointer-based virtual replication
(Diagram: production host and BC host connected to one storage array that holds both the source and the replica)

Full Volume Mirroring: Attached
The target is a full physical copy of the source device.
The target is attached to the source, and data from the source is copied to the target.
The target is unavailable to other hosts while it is attached.
The target device is as large as the source device.
Good for full backup, decision support, development, testing, and restore to the last PIT.
(Diagram: production host with read/write access to the source; the attached target is Not Ready to the BC host)

Full Volume Mirroring: Detached
After synchronization, the target can be detached from the source and made available for business continuity (BC) operations; the PIT is determined by the time of detachment.
After detachment, re-synchronization can be incremental: only updated blocks are resynchronized, and modified blocks are tracked using bitmaps.
(Diagram: production host with read/write access to the source; the detached PIT target is read/write to the BC host)

Full Volume Mirroring: Source and Target Relationship
Attached/synchronization: Source = Target
Detached: Source ≠ Target
Resynchronization returns the pair to the synchronized (Source = Target) state.

Pointer-based Full-Volume Replication
Provides a full copy of the source data on the target.
The target device is immediately accessible by the BC host after the replication session is activated; the PIT is determined by the time of session activation.
The size of the target device is at least as large as the source device.
Two modes:
Full copy mode – after the session starts, all data from the source is copied to the target in the background.
Copy on First Access (CoFA, or deferred) mode – data is copied only when it is first accessed, as described on the next slides.

Copy on First Access: Write to the Source
When a write is issued to the source for the first time after replication session activation:
The original data at that address is copied to the target.
Then the new data is updated on the source.
This ensures that the original data at the point in time of activation is preserved on the target.
(Diagram: the production host writes C' to the source; the original C is copied to the target first)

Copy on First Access: Write to the Target
When a write is issued to the target for the first time after replication session activation:
The original data is copied from the source to the target.
Then the new data is updated on the target.
(Diagram: the BC host writes B' to the target; the original B is copied from the source first)

Copy on First Access: Read from the Target
When a read is issued to the target for the first time after replication session activation:
The original data is copied from the source to the target and is made available to the BC host.
(Diagram: the BC host requests data "A" from the target; A is copied from the source and returned)
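
The three CoFA cases can be summarized in one small model: a block is copied from the source to the target the first time it is written on the source, written on the target, or read from the target. A minimal sketch under those assumptions (purely illustrative, in-memory):

```python
class CoFAReplica:
    """Sketch of Copy on First Access (deferred) full-volume replication."""

    def __init__(self, source):
        self.source = source
        self.target = [None] * len(source)
        self.copied = [False] * len(source)    # has this block been copied yet?

    def _copy_if_needed(self, block):
        if not self.copied[block]:
            self.target[block] = self.source[block]   # preserve the PIT data
            self.copied[block] = True

    def write_source(self, block, data):
        self._copy_if_needed(block)            # original goes to the target first
        self.source[block] = data

    def write_target(self, block, data):
        self._copy_if_needed(block)            # original copied from the source first
        self.target[block] = data

    def read_target(self, block):
        self._copy_if_needed(block)            # a first read also triggers the copy
        return self.target[block]

r = CoFAReplica(["A", "B", "C"])
r.write_source(2, "C'")     # original "C" is preserved on the target
print(r.read_target(2))     # -> C
print(r.read_target(0))     # -> A (copied on first read)
```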

Pointer-based Virtual Replication
The target does not hold data; it holds pointers to where the data is located.
At the start of the session, the target device holds pointers to the data on the source device, so the target requires only a small fraction of the size of the source volume.
Advantages: less storage space is required for the replica, and the target devices are accessible immediately when the session is started.
Uses the CoFW principle.
This method is recommended when the changes to the source are typically less than 30%.

Pointer-based Virtual Replication (CoFW): Write to the Source
When a write is issued to the source for the first time after replication session activation:
The original data at that address is copied to the save location.
The pointer on the target is updated to point to this data in the save location.
Finally, the new write is applied to the source.
(Diagram: the production host writes C' to the source; the original C is moved to the save location and the target virtual device pointer is redirected to it)

Pointer-based Virtual Replication (CoFW): Write to the Target
When a write is issued to the target for the first time after replication session activation:
The original data from the source device is copied to the save location, and the pointer is updated to that data in the save location.
Another copy of the original data is created in the save location, and the new write is then applied to that copy in the save location.
(Diagram: the BC host writes A' to the target virtual device; the original A is copied to the save location before A' is written there)
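
A simplified, in-memory sketch of the save-location mechanics from the last two slides (names and structure are illustrative; a real implementation works on block devices inside the array):

```python
class VirtualReplica:
    """Sketch of pointer-based virtual replication with a save location.

    The target is a virtual device holding only pointers: initially every
    block points at the source; after a first write, the affected pointer is
    redirected to the preserved (or newly written) data in the save location.
    """

    def __init__(self, source):
        self.source = source
        self.save = []                                           # save location
        self.pointers = {i: ("source", i) for i in range(len(source))}

    def _save_original(self, block):
        """CoFW: copy the original source data to the save location, repoint the target."""
        if self.pointers[block][0] == "source":
            self.save.append(self.source[block])
            self.pointers[block] = ("save", len(self.save) - 1)

    def write_source(self, block, data):
        self._save_original(block)       # preserve the PIT data first
        self.source[block] = data

    def write_target(self, block, data):
        self._save_original(block)       # original goes to the save location
        self.save.append(data)           # new target data is also written to the save location
        self.pointers[block] = ("save", len(self.save) - 1)

    def read_target(self, block):
        where, idx = self.pointers[block]
        return self.source[idx] if where == "source" else self.save[idx]

v = VirtualReplica(["A", "B", "C"])
v.write_source(2, "C'")
v.write_target(0, "A'")
print(v.read_target(2), v.read_target(0), v.read_target(1))   # -> C A' B
```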

Tracking Changes to Source and Target
The bits in the source and target bitmaps are all set to 0 (zero) when the replica is created.
Any changes made to the source or to the replica after that PIT are flagged by setting the corresponding bits to 1 in the respective bitmap.
When resynchronization or a restore is required, a logical OR operation between the source bitmap and the target bitmap identifies the blocks that must be copied.
(Diagram: source and target bitmaps at the PIT and after the PIT, combined with a logical OR for resynchronization/restore)
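
The logical OR itself is straightforward; a one-line sketch with an illustrative five-block example:

```python
def blocks_to_copy(source_bitmap, target_bitmap):
    """A block must be copied during resynchronization or restore if it was
    modified on either the source or the target after the point in time."""
    return [s | t for s, t in zip(source_bitmap, target_bitmap)]

# Example: blocks 1 and 4 changed on the source, block 2 changed on the target.
print(blocks_to_copy([0, 1, 0, 0, 1], [0, 0, 1, 0, 0]))   # -> [0, 1, 1, 0, 1]
```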

Restore and Restart Considerations
Scenario: the source has a failure – logical corruption or physical failure of the source devices.
Solution 1 – restart production on the target:
Stop all access to the source and target devices.
Identify the target to be used for restart, based on RPO and data consistency.
Create a "gold" copy of the target as a precaution against further failures.
Start production on the target.
Solution 2 – restore from the target:
Stop all access to the source and target devices.
Identify the target to be used for restore, based on RPO and data consistency.
Perform the restore.

Restore and Restart Operation Considerations (contd.)
Full-volume replicas (mirrors, and pointer-based replicas in full copy mode):
Restores can be performed either to the original source devices or to a new set of source devices.
Restores to the original source can be incremental in nature; a restore to a new device involves a full synchronization.
Pointer-based virtual replicas, and pointer-based full-volume replicas in CoFA mode:
Access to data on the replica depends on the health and accessibility of the source volumes.
If the source volume is inaccessible for any reason, these replicas cannot be used for a restore or a restart operation.

Comparison of Local Replication Technologies
Performance impact on source due to replica:
  Full-volume mirroring – no impact
  Pointer-based full-volume replication – full copy mode: no impact; CoFA mode: some impact
  Pointer-based virtual replication – high impact
Size of target:
  Full-volume mirroring – at least the same as the source
  Pointer-based full-volume replication – at least the same as the source
  Pointer-based virtual replication – a small fraction of the source
Availability of source for restoration:
  Full-volume mirroring – not required
  Pointer-based full-volume replication – full copy mode: not required; CoFA mode: required
  Pointer-based virtual replication – required
Accessibility to target:
  Full-volume mirroring – only after synchronization and detachment from the source
  Pointer-based full-volume replication – immediately accessible
  Pointer-based virtual replication – immediately accessible

Creating Multiple Replicas
A copy is created from the same source every 6 hours (for example at 6:00 a.m., 12:00 p.m., 6:00 p.m., and 12:00 a.m., producing Replica 1 through Replica 4 on the target devices).
If the source is corrupted, data can be restored from the latest PIT copy.

Local Replication Management: Array Based
Replication management software resides on the storage array and provides an interface for easy and reliable replication management.
Two types of interface: CLI and GUI.

Network-based Local Replication: Continuous Data Protection (CDP)
Replication occurs at the network layer, between the hosts and the storage arrays, which makes it ideal for highly heterogeneous environments.
Typically provides the ability to restore data to any previous point in time; recovery points are arbitrary and do not need to be defined in advance.
Data changes are continuously captured and stored in a location separate from the production data.
CDP is implemented using a journal volume, a CDP appliance, and a write splitter.

CDP Local Replication Operation
Before the start of replication, the replica is synchronized with the source; then the replication process starts.
After replication starts, all writes to the source are split into two copies: one copy is sent to the CDP appliance, and the other to the production volume.
When the CDP appliance receives its copy of a write, the write is recorded in the journal volume along with its timestamp; data from the journal volume is then applied to the replica at predefined intervals.
When recovering data to the source, the CDP appliance restores the data from the replica and applies journal entries up to the point in time chosen for recovery.
(Diagram: host with a write splitter, SAN, production volume, CDP appliance, CDP journal, and replica on the storage array)
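
The write-splitting and journal-replay flow can be sketched as follows. This is a toy model under stated assumptions (dictionary "volumes", explicit timestamps); a real CDP appliance splits writes at the block level and keeps the journal and replica on dedicated volumes:

```python
class CDPReplication:
    """Minimal sketch of CDP write splitting and journal-based recovery."""

    def __init__(self, initial):
        self.production = dict(initial)
        self.replica_baseline = dict(initial)    # synchronized before replication starts
        self.journal = []                        # (timestamp, block, data)

    def write(self, block, data, ts):
        self.journal.append((ts, block, data))   # one copy goes to the CDP appliance/journal
        self.production[block] = data            # the other copy goes to the production volume

    def recover(self, point_in_time):
        """Rebuild the image as of point_in_time by replaying the journal on the baseline."""
        image = dict(self.replica_baseline)
        for ts, block, data in self.journal:
            if ts <= point_in_time:
                image[block] = data
        return image

cdp = CDPReplication({"blk1": "a", "blk2": "b"})
cdp.write("blk1", "a2", ts=100)
cdp.write("blk2", "b2", ts=200)
print(cdp.recover(150))   # -> {'blk1': 'a2', 'blk2': 'b'}
```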

Lesson 3: Local Replication in a Virtualized Environment
During this lesson the following topics are covered:
Mirroring of a virtual volume
Replication of virtual machines

Local Replication in a Virtualized Environment
Local replication (mirroring) of a virtual volume assigned to a host: the mirroring is performed by a virtualization appliance.
The virtualization appliance in the SAN abstracts the LUNs and creates virtual volumes, which are presented to the host; the appliance has the ability to mirror the data of a virtual volume between LUNs.
Replication of virtual machines: VM snapshot and VM clone.

Local Replication of a Virtual Volume
Each I/O to the mirrored virtual volume is mirrored to the underlying LUNs on the arrays.
If one of the arrays incurs an outage, the virtualization appliance continues processing I/O on the surviving mirror leg.
Upon restoration of the failed storage array, the data from the surviving LUN is resynchronized to the recovered leg.
(Diagram: a virtual volume mirrored between arrays within a data center, with mirror legs on LUNs drawn from each array's storage pool and presented to the host by the virtualization appliance in the SAN)

VM Snapshot
Captures the state and data of a running VM at a specific PIT.
Uses a separate delta file to record all changes to the virtual disk after the snapshot session is activated.
Restoring the snapshot returns the VM, including all settings configured in the guest OS, to that PIT.

VM Clone
An identical copy of an existing VM.
Clones are created for different uses, such as testing.
Changes made to a clone do not affect the parent VM, and vice versa.
A clone is assigned a separate network identity and has its own MAC address.
Useful when multiple identical VMs need to be deployed.

Concepts in Practice
EMC SnapView
EMC TimeFinder
EMC RecoverPoint

EMC SnapView
SnapView Snapshot: a logical, point-in-time view of the production volume; uses the CoFW principle.
SnapView Clone: a full-volume copy that requires the same disk space as the source; becomes a PIT copy once the clone is fractured from the source.
(Diagram: a source LUN with a full-image clone copy and logical point-in-time snap views)

EMC TimeFinder
TimeFinder/Snap: creates space-saving, logical PIT copies (snapshots); allows creating multiple snapshots from a single source.
TimeFinder/Clone: creates a PIT copy of the source volume using pointer-based full-volume replication technology; allows creating multiple clones from a single source device.

EMC RecoverPoint
Provides continuous data protection and recovery to any PIT.
Uses splitting technology at the server, fabric, or array to mirror each write to a RecoverPoint appliance.
Provides automatic RecoverPoint appliance failover.
The product family includes RecoverPoint/CL, RecoverPoint/EX, and RecoverPoint/SE.

Module 11: Summary
Key points covered in this module:
Uses of local replicas
Consistency in file system and database replication
Host-based, storage array-based, and network-based replication
Restore and restart considerations
Local replication of a virtual volume
VM snapshot and VM clone