1
Section 3: Business Continuity, Lecture 29
2
After completing this chapter you will be able to:
Discuss local replication and the possible uses of local replicas
Explain consistency considerations when replicating file systems and databases
Discuss host- and array-based replication technologies
◦ Functionality
◦ Differences
◦ Considerations
◦ Selecting the appropriate technology
3
Upon completion of this lesson, you will be able to:
Define local replication
Discuss the possible uses of local replicas
Explain replica considerations such as recoverability and consistency
Describe how consistency is ensured in file system and database replication
Explain the dependent write I/O principle
4
Replica: an exact copy
Replication: the process of reproducing data
Local replication: replicating data within the same array or the same data center
[Diagram: Source → Replica (Target), connected by replication]
5
Alternate source for backup
◦ An alternative to performing backups on production volumes
Fast recovery
◦ Provides minimal RTO (recovery time objective)
Decision support
◦ Use replicas to run decision-support operations such as creating a report
◦ Reduces the burden on production volumes
Testing platform
◦ Test critical business data or applications
Data migration
◦ Use replicas for data migration instead of production volumes
6
Types of replica: the choice of replica ties back to the RPO (recovery point objective)
◦ Point-in-Time (PIT): non-zero RPO
◦ Continuous: near-zero RPO
What makes a replica good
◦ Recoverability/restartability
  The replica should be able to restore data on the source device
  Business operations can be restarted from the replica
◦ Consistency
  Ensuring consistency is a primary requirement for all replication technologies
7
Ensure data buffered in the host is properly captured on disk when the replica is created
◦ Data is buffered in the host before being written to disk
Consistency is required to ensure the usability of the replica
Consistency can be achieved in various ways:
◦ For file systems
  Offline: unmount the file system
  Online: flush host buffers (see the sketch below)
◦ For databases
  Offline: shut down the database
  Online: put the database in hot backup mode, rely on the dependent write I/O principle, or hold I/Os while the replica is created
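A minimal sketch of the online file-system case, in Python: flush host buffers to disk, then take the point-in-time copy. The `trigger_replica` callable is a hypothetical stand-in for whatever array or LVM command actually creates the replica; it is not a real API.

```python
import os

def create_consistent_fs_replica(trigger_replica):
    """Online FS consistency: flush host buffers, then replicate.

    trigger_replica is a hypothetical callable standing in for the
    array/LVM operation that creates the point-in-time copy.
    """
    # Flush all data buffered in host memory down to disk so the
    # on-disk image is usable at the moment the replica is taken.
    os.sync()
    # Create the replica while the flushed state is on disk.
    trigger_replica()
```

A real implementation would also hold new I/Os between the flush and the copy, which is the "holding I/Os" option listed on the slide.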
8
[Diagram: host I/O stack — Application, File System, Memory Buffers, Logical Volume Manager, Physical Disk Driver; a data sync daemon flushes buffered data to disk so the Source is consistent before the Replica is created]
9
Dependent write: a write I/O that will not be issued by an application until a prior related write I/O has completed
◦ A logical dependency, not a time dependency
Inherent in all database management systems (DBMS)
◦ e.g., a page (data) write is a dependent write I/O issued only after a successful log write (see the sketch below)
Necessary for protection against local outages
◦ Power failures create a dependent-write-consistent image
◦ A restart transforms the dependent-write-consistent image into a transactionally consistent one, i.e., committed transactions are recovered and in-flight transactions are discarded
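A minimal sketch of the log-before-data ordering in a hypothetical DBMS commit path; the file descriptors and record formats are placeholders, not any real database's API.

```python
import os

def commit(log_fd, data_fd, log_record, data_page, page_offset):
    """Dependent write principle: the data-page write is not issued
    until the related log write has completed (a logical dependency,
    not a timing one)."""
    # 1. The log write must reach stable storage first...
    os.write(log_fd, log_record)
    os.fsync(log_fd)
    # 2. ...only then is the dependent data-page write issued.
    os.lseek(data_fd, page_offset, os.SEEK_SET)
    os.write(data_fd, data_page)
    os.fsync(data_fd)
```

If power fails between the two steps, a restart replays the log: committed transactions are recovered and in-flight transactions are discarded, as the slide describes.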
10
[Diagram: inconsistent vs. consistent replica — a replica that captures later dependent writes (3, 4) without the earlier writes they depend on (1, 2) is inconsistent; a replica holding all writes 1–4 in order is consistent]
11
[Diagram: by holding I/O 5 while writes 1–4 are captured on the replica, the replica remains consistent]
12
Key points covered in this lesson:
Possible uses of local replicas
◦ Alternate source for backup
◦ Fast recovery
◦ Decision support
◦ Testing platform
◦ Data migration
Recoverability and consistency
File system and database replication consistency
Dependent write I/O principle
13
Upon completion of this lesson, you will be able to:
Discuss host- and array-based local replication technologies
◦ Options
◦ Operation
◦ Comparison
14
Host based
◦ Logical Volume Manager (LVM) based replication (LVM mirroring)
◦ File system snapshot
Storage array based
◦ Full-volume mirroring
◦ Pointer-based full-volume replication
◦ Pointer-based virtual replication
15
[Diagram: LVM mirroring — a host logical volume mirrored across Physical Volume 1 and Physical Volume 2]
16
LVM-based replicas add overhead on the host CPU
◦ Each write is translated into two writes on disk (see the sketch below)
◦ Can degrade application performance
If host volumes are already storage array LUNs, the added redundancy provided by LVM mirroring is unnecessary
◦ The devices already have some RAID protection
Both replica and source are stored within the same volume group
◦ The replica cannot be accessed by another host
◦ If the server fails, both source and replica are unavailable
Keeping track of changes on the mirrors is a challenge
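A toy illustration of the first point: with host-based mirroring, every logical write fans out into two physical writes issued by the host, which is the overhead the slide describes. The volumes here are plain lists standing in for block devices.

```python
def mirrored_write(block, data, physical_volume_1, physical_volume_2):
    """Toy LVM mirror: one logical write costs two physical writes,
    both performed by the host."""
    physical_volume_1[block] = data   # write to the first mirror copy
    physical_volume_2[block] = data   # ...and again to the second
```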
17
Pointer-based replica
◦ Uses the Copy on First Write principle
◦ Uses a bitmap and a block map
  Bitmap: tracks blocks that have changed on the production/source FS after creation of the snap; initially all zeros
  Block map: indicates the block address from which data is to be read when data is accessed from the snap FS; initially points to the production/source FS
◦ Requires a fraction of the space used by the original FS
◦ Implemented either by the FS itself or by the LVM
18
[Diagram: writes to the production FS after the snap — before new data (c, d) overwrites a block, the original data is copied to the snap FS, the block's bitmap entry flips from 0 to 1, and the block map is updated to point at the copied block]
19
Reads from the snap FS
◦ Consult the bitmap
  If 0, direct the read to the production FS
  If 1, consult the block map, get the block address, and read the data from that address
[Diagram: read path — unchanged blocks are read from the production FS; changed blocks are read from the snap FS via the block map]
(A toy model of this write and read path follows below.)
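A toy in-memory model of the bitmap/block-map mechanics from the last two slides, not any vendor's implementation: writes to production copy the original block to the snap save area on first write; reads from the snap consult the bitmap and, when set, follow the block map.

```python
class SnapFS:
    """Toy pointer-based FS snapshot using Copy on First Write."""

    def __init__(self, prod_blocks):
        self.prod = prod_blocks                  # production FS blocks
        self.save = []                           # snap FS save area
        self.bitmap = [0] * len(prod_blocks)     # 0 = unchanged since snap
        self.blkmap = {}                         # prod block -> save-area addr

    def write_prod(self, blk, data):
        """New write to the production FS after the snap was created."""
        if self.bitmap[blk] == 0:                # first write to this block?
            self.blkmap[blk] = len(self.save)    # record where the copy lands
            self.save.append(self.prod[blk])     # copy original data to snap
            self.bitmap[blk] = 1                 # flag the block as changed
        self.prod[blk] = data

    def read_snap(self, blk):
        """Read the block as it was at the snap's point in time."""
        if self.bitmap[blk] == 0:
            return self.prod[blk]                # bitmap 0: read production FS
        return self.save[self.blkmap[blk]]       # bitmap 1: follow block map

fs = SnapFS(["a", "b", "C", "D"])
fs.write_prod(2, "c")                            # overwrite block 2 after the snap
assert fs.read_snap(2) == "C"                    # snap still sees PIT data
assert fs.read_snap(0) == "a"                    # unchanged block read from prod
```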
20
Replication is performed by the array operating environment
Replicas are on the same array
Types of array-based replication
◦ Full-volume mirroring
◦ Pointer-based full-volume replication
◦ Pointer-based virtual replication
[Diagram: a production server and a BC server attached to the same array, which holds the Source and the Replica]
21
Target is a full physical copy of the source device
Target is attached to the source, and data from the source is copied to the target
Target is unavailable while it is attached
Target device is as large as the source device
Good for full backup, decision support, development, testing, and restore to the last PIT
[Diagram: Source (Read/Write) attached to Target (Not Ready) within the array]
22
After synchronization, the target can be detached from the source and made available for BC (business continuity) operations
The PIT is determined by the time of detachment
After detachment, resynchronization can be incremental
◦ Only updated blocks are resynchronized
◦ Modified blocks are tracked using bitmaps
[Diagram: Source and Target both Read/Write after detachment at the PIT]
23
Attached/synchronization: Source = Target
Detached: Source ≠ Target
Resynchronization: Source = Target
24
Provides a full copy of the source data on the target
The target device is made accessible for business operations as soon as the replication session starts
The point-in-time is determined by the time of session activation
Two modes
◦ Copy on First Access (deferred)
◦ Full Copy mode
The target device is at least as large as the source device
25
[Diagram: Copy on First Access — on a write to the source, a read from the target, or a write to the target, the original source block is copied to the target before the operation completes]
26
In Full Copy mode, on session start the entire contents of the source device are copied to the target device in the background
If the replication session is terminated, the target still contains all the original data from the source at the PIT of activation
◦ The target can be used for restore and recovery
◦ In CoFA mode, the target holds only the data that was accessed before termination, and therefore it cannot be used for restore and recovery
Most vendor implementations provide the ability to track changes
◦ Made to the source or target
◦ Enables incremental resynchronization
(A toy model of the CoFA behavior follows below.)
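A hypothetical in-memory model of the CoFA behavior described above: the first access to a block, from either side, copies the point-in-time data to the target; in Full Copy mode a background loop simply walks every block through the same path.

```python
class CoFAReplica:
    """Toy Copy on First Access (deferred mode) replica."""

    def __init__(self, source_blocks):
        self.source = source_blocks
        self.target = [None] * len(source_blocks)
        self.copied = [False] * len(source_blocks)   # per-block copy state

    def _copy_on_first_access(self, blk):
        if not self.copied[blk]:
            self.target[blk] = self.source[blk]      # preserve PIT data
            self.copied[blk] = True

    def write_source(self, blk, data):
        self._copy_on_first_access(blk)              # save the PIT block first
        self.source[blk] = data

    def read_target(self, blk):
        self._copy_on_first_access(blk)
        return self.target[blk]

    def write_target(self, blk, data):
        self._copy_on_first_access(blk)              # fetch PIT data, then overwrite
        self.target[blk] = data

    def full_copy_background(self):
        """Full Copy mode: copy every remaining block in the background."""
        for blk in range(len(self.source)):
            self._copy_on_first_access(blk)
```

This also shows why a terminated CoFA session cannot be used for restore: blocks that were never accessed were never copied, whereas after the background full copy completes the target holds the entire PIT image.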
27
Targets do not hold actual data, but pointers to where the data is located
◦ The target requires only a small fraction of the size of the source volumes
A replication session is set up between the source and target devices
◦ Target devices are accessible immediately when the session starts
◦ At the start of the session, the target device holds pointers to data on the source device
Typically recommended when changes to the source are less than 30%
28
[Diagram: pointer-based virtual replication — the virtual Target device holds pointers to the Source and to a Save Location that preserves original data]
29
Changes can occur to the source/target devices after the PIT has been created
How, and at what level of granularity, should this be tracked?
◦ Tracking changes bit by bit is too expensive
  It would require an equivalent amount of storage just for the tracking
◦ Vendors choose some level of granularity and create a bitmap (one for the source and one for the target)
  For example, one could choose 32 KB as the granularity
  If any bit in a 32 KB chunk changes, the whole chunk is flagged as changed in the bitmap
  For a 1 GB device, the map takes up only 32,768 bits = 32768/8/1024 = 4 KB of space
30
[Diagram: at the PIT both bitmaps are all zeros (0 = unchanged, 1 = changed); after the PIT, the source and target bitmaps are combined with a logical OR to identify the chunks that must be copied for resynchronization/restore]
(A small sketch of this arithmetic and the OR step follows below.)
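A small sketch of the bitmap arithmetic and the OR step from the last two slides; the 32 KB granularity is the slide's example, not a fixed value.

```python
CHUNK = 32 * 1024                                # 32 KB granularity (example)

def bitmap_bits(device_bytes, chunk=CHUNK):
    """One bit per chunk is enough to track changes at this granularity."""
    return device_bytes // chunk

# The slide's arithmetic: a 1 GB device needs 32,768 bits,
# i.e. 32768 / 8 / 1024 = 4 KB of bitmap space.
assert bitmap_bits(1 << 30) == 32768

def chunks_to_resync(source_map, target_map):
    """A chunk must be copied if it changed on EITHER side after the
    PIT, hence the logical OR of the two bitmaps."""
    return [s | t for s, t in zip(source_map, target_map)]

# Example: chunks 0 and 2 changed on the source; 2 and 3 on the target.
print(chunks_to_resync([1, 0, 1, 0], [0, 0, 1, 1]))  # -> [1, 0, 1, 1]
```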
31
When the source has a failure
◦ Logical corruption
◦ Physical failure of the source devices
◦ Failure of the production server
Solution
◦ Restore data from the target to the source
  The restore is typically done incrementally
  Applications can be restarted even before synchronization is complete
◦ OR: start production on the target
  Resolve issues with the source while continuing operations on the target
  After issue resolution, restore the latest data from the target to the source
32
Before a restore
◦ Stop all access to the source and target devices
◦ Identify the target to be used for the restore
  Based on RPO and data consistency
◦ Perform the restore
Before starting production on the target
◦ Stop all access to the source and target devices
◦ Identify the target to be used for the restart
  Based on RPO and data consistency
◦ Create a "Gold" copy of the target
  As a precaution against further failures
◦ Start production on the target
33
Pointer-based full-volume replicas
◦ Restores can be performed to either the original source device or to any other device of like size
  Restores to the original source can be incremental in nature
  A restore to a new device involves a full synchronization
Pointer-based virtual replicas
◦ Restores can be performed to the original source or to any other device of like size, as long as the original source device is healthy
  The target holds only pointers
    Pointers to the source for data that has not been written to after the PIT
    Pointers to the save location for data that was written after the PIT
  Thus, to perform a restore to an alternate volume, the source must be healthy so that data not yet copied over to the target can be accessed
34
Performance impact on source
◦ Full-volume mirroring: no impact
◦ Pointer-based full-volume replication: CoFA mode, some impact; Full Copy, no impact
◦ Pointer-based virtual replication: high impact
Size of target
◦ Full-volume mirroring: at least the same as the source
◦ Pointer-based full-volume replication: at least the same as the source
◦ Pointer-based virtual replication: a small fraction of the source
Accessibility of source for restoration
◦ Full-volume mirroring: not required
◦ Pointer-based full-volume replication: CoFA mode, required; Full Copy, not required
◦ Pointer-based virtual replication: required
Accessibility to target
◦ Full-volume mirroring: only after synchronization and detachment from the source
◦ Pointer-based full-volume replication: immediately accessible
◦ Pointer-based virtual replication: immediately accessible
35
[Diagram: timeline from 06:00 A.M. through 12:00 A.M. showing multiple point-in-time copies of the source created on different target devices during the day]
36
Replication management software resides on the storage array
Provides an interface for easy and reliable replication management
Two types of interface:
◦ CLI
◦ GUI
37
Key points covered in this lesson:
Replication technologies
◦ Host based
  LVM-based mirroring
  File system snapshot
◦ Array based
  Full-volume mirroring
  Pointer-based full-volume copy
  Pointer-based virtual replica
38
Key points covered in this chapter:
Definition and possible uses of local replicas
Consistency considerations when replicating file systems and databases
Host-based replication
◦ LVM-based mirroring, file system snapshot
Storage-array-based replication
◦ Full-volume mirroring, pointer-based full-volume and virtual replication
◦ Choice of technology
Additional task: research EMC replication products
39
Describe the uses of a local replica in various business operations.
How can consistency be ensured when replicating a database?
What are the differences between full-volume mirroring and pointer-based replicas?
What is the key difference between Full Copy mode and deferred (CoFA) mode?
What are the considerations when performing restore operations for each array replication technology?