Section 3: Business Continuity, Lecture 29

After completing this chapter you will be able to:
- Discuss local replication and the possible uses of local replicas
- Explain consistency considerations when replicating file systems and databases
- Discuss host-based and array-based replication technologies:
  - Functionality
  - Differences
  - Considerations
  - Selecting the appropriate technology

Upon completion of this lesson, you will be able to:
- Define local replication
- Discuss the possible uses of local replicas
- Explain replica considerations such as recoverability and consistency
- Describe how consistency is ensured in file system and database replication
- Explain the dependent write principle

- Replica: an exact copy
- Replication: the process of reproducing data
- Local replication: replicating data within the same array or the same data center

(Diagram: replication from a Source to a Replica/Target.)

- Alternate source for backup: an alternative to running backups against production volumes
- Fast recovery: provides a minimal RTO (recovery time objective)
- Decision support: run decision-support operations, such as report generation, against replicas to reduce the burden on production volumes
- Testing platform: test critical business data or applications
- Data migration: migrate data from replicas instead of production volumes

- Types of replica: the choice of replica ties back to the RPO (recovery point objective)
  - Point-in-Time (PIT): non-zero RPO
  - Continuous: near-zero RPO
- What makes a replica good:
  - Recoverability/restartability: the replica should be able to restore data on the source device, and business operations should be restartable from the replica
  - Consistency: ensuring consistency is a primary requirement for all replication technologies

- Ensure that data buffered in the host is properly captured on disk when the replica is created (data is buffered in the host before it is written to disk)
- Consistency is required to ensure the usability of the replica
- Consistency can be achieved in various ways:
  - For file systems:
    - Offline: unmount the file system
    - Online: flush host buffers
  - For databases:
    - Offline: shut down the database
    - Online: put the database in hot backup mode, rely on the dependent write I/O principle, or hold I/Os

(Diagram: application data passes through host memory buffers, the file system, the logical volume manager, and the physical disk driver before reaching disk; a sync daemon flushes the buffers so that the replica matches the source.)
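For the online file-system case, flushing host buffers down to disk can be sketched with standard OS calls. This is an illustrative sketch, not a vendor replication API, and `write_then_flush` is a hypothetical helper name:

```python
import os

def write_then_flush(path, data):
    """Flush buffered data all the way to disk so that a replica
    created immediately afterwards sees a consistent file state."""
    with open(path, "wb") as f:
        f.write(data)          # data lands in the user-space buffer
        f.flush()              # push it into the OS page cache
        os.fsync(f.fileno())   # ask the OS to commit the page cache to disk
```

A real online replica would flush every dirty buffer for the file system (or freeze it) rather than a single file, but the `fsync` step is the essential ingredient.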

- Dependent write: a write I/O that will not be issued by an application until a prior, related write I/O has completed
  - A logical dependency, not a time dependency
- Inherent in all database management systems (DBMS)
  - e.g., a page (data) write is a dependent write I/O issued only after a successful log write
- Necessary for protection against local outages
  - A power failure creates a dependent-write-consistent image
  - A restart transforms a dependent-write-consistent image into a transactionally consistent one: committed transactions are recovered, and in-flight transactions are discarded
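The log-before-page ordering can be sketched in a few lines. This is purely illustrative (real DBMS recovery is far more involved, and the file names are hypothetical); the point is that the second write is not issued until the first is durable:

```python
import os

def durable_append(path, record):
    """Append a record and wait until it is durable on disk."""
    with open(path, "ab") as f:
        f.write(record)
        f.flush()
        os.fsync(f.fileno())   # blocks until the write has completed

def commit(log_path, data_path, record):
    """Dependent write: the data-page write is not issued until the
    related log write has completed (a logical, not time, dependency)."""
    durable_append(log_path, record)   # 1. the prior write must complete
    durable_append(data_path, record)  # 2. only then the dependent write
```

If power fails between steps 1 and 2, the log still allows the committed change to be recovered on restart, which is exactly the dependent-write-consistent image described above.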

(Diagram: a replica captured mid-update is inconsistent with its source; a replica captured after dependent writes complete is consistent.)


Key points covered in this lesson:
- Possible uses of local replicas: alternate source for backup, fast recovery, decision support, testing platform, data migration
- Recoverability and consistency
- File system and database replication consistency
- Dependent write I/O principle

Upon completion of this lesson, you will be able to:
- Discuss host-based and array-based local replication technologies:
  - Options
  - Operation
  - Comparison

- Host based:
  - Logical Volume Manager (LVM) based replication (LVM mirroring)
  - File system snapshot
- Storage array based:
  - Full-volume mirroring
  - Pointer-based full-volume replication
  - Pointer-based virtual replication

(Diagram: LVM mirroring, in which a host logical volume maps to two physical volumes.)

- LVM-based replicas add overhead on the host CPU: each write is translated into two writes on disk, which can degrade application performance
- If host volumes are already storage-array LUNs, the added redundancy provided by LVM mirroring is unnecessary; the devices already have some RAID protection
- Both the replica and the source are stored within the same volume group, so the replica cannot be accessed by another host, and if the server fails, both source and replica are unavailable
- Keeping track of changes on the mirrors is a challenge
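The "each write becomes two writes" overhead can be made concrete with a toy model (purely illustrative; mirrors are modeled as block lists, not real physical volumes):

```python
def mirrored_write(mirrors, block_no, data):
    """LVM mirroring sketch: the host issues one physical write per
    mirror copy, which is where the extra CPU and I/O cost comes from."""
    for mirror in mirrors:        # two mirrors -> two disk writes
        mirror[block_no] = data

# Example: a logical volume backed by two physical volumes of 4 blocks
pv1, pv2 = [b""] * 4, [b""] * 4
mirrored_write([pv1, pv2], 2, b"block")
```

Every logical write fans out to all mirrors, so write bandwidth and CPU cost scale with the number of copies.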

- Pointer-based replica
  - Uses the Copy on First Write (CoFW) principle
  - Uses a bitmap and a block map:
    - Bitmap: tracks blocks that have changed on the production/source FS after creation of the snap; initially all zeros
    - Block map: indicates the block address from which data is to be read when data is accessed from the snap FS; initially points to the production/source FS
  - Requires only a fraction of the space used by the original FS
  - Implemented either by the FS itself or by the LVM

Metadata  Write to Production FS Prod FS Metadata 1 Data a 2 Data b Snap FS 1 Nodata 3 no data 4 no data BitBLK N Data N New writes 3 Data C 2 no data c 2 Data c Data dD 1 no data 1 Data d

- Reads from the snap FS consult the bitmap:
  - If the bit is 0, the read is directed to the production FS
  - If the bit is 1, the block map supplies the block address, and the data is read from that address

(Diagram: snap FS metadata holding the bitmap and block-map entries alongside the production FS blocks.)
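The bitmap/block-map mechanics above can be sketched as a tiny copy-on-first-write snapshot. This is an illustrative model (blocks as list entries), not a real file system or vendor implementation:

```python
class CowSnapshot:
    """Copy-on-first-write snapshot over a production 'FS' modeled
    as a list of blocks."""

    def __init__(self, prod):
        self.prod = prod                  # production/source blocks (live)
        self.bitmap = [0] * len(prod)     # 0 = unchanged since the snap
        self.blockmap = {}                # changed block -> saved PIT data

    def write_prod(self, i, data):
        if self.bitmap[i] == 0:           # first write after the PIT:
            self.blockmap[i] = self.prod[i]   # save the old data first
            self.bitmap[i] = 1
        self.prod[i] = data               # then apply the new write

    def read_snap(self, i):
        # Consult the bitmap: 0 -> read the production FS,
        # 1 -> follow the block map to the saved data
        return self.blockmap[i] if self.bitmap[i] else self.prod[i]

# Usage: the snap keeps presenting the point-in-time data
fs = ["a", "b", "c"]
snap = CowSnapshot(fs)
snap.write_prod(1, "B")
```

Because unchanged blocks are never copied, the snap consumes only a fraction of the source's space, as the slide states.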

- Replication is performed by the array operating environment
- Replicas reside on the same array as the source
- Types of array-based replication:
  - Full-volume mirroring
  - Pointer-based full-volume replication
  - Pointer-based virtual replication

(Diagram: a production server and a BC server attached to the same array, which holds both the source and the replica.)

- The target is a full physical copy of the source device
- The target is attached to the source, and data from the source is copied to the target
- The target is unavailable (Not Ready) while it is attached
- The target device is as large as the source device
- Good for full backup, decision support, development, testing, and restore to the last PIT

(Diagram: source (read/write) attached to target (not ready) within the array.)

- After synchronization, the target can be detached from the source and made available for BC (business continuity) operations
- The PIT is determined by the time of detachment
- After detachment, resynchronization can be incremental:
  - Only updated blocks are resynchronized
  - Modified blocks are tracked using bitmaps

(Diagram: source and target detached at the PIT, both read/write, within the array.)

(Diagram: the mirroring life cycle — attached/synchronization (source = target), then detached (source ≠ target), then resynchronization (source = target).)

- Provides a full copy of the source data on the target
- The target device is made accessible for business operations as soon as the replication session is started
- The point-in-time is determined by the time of session activation
- Two modes:
  - Copy on First Access (CoFA, deferred mode)
  - Full Copy mode
- The target device is at least as large as the source device

(Diagram: in CoFA mode, a write to the source, a write to the target, or a read from the target each trigger a copy of the original block from source to target.)

- In Full Copy mode, on session start the entire contents of the source device are copied to the target device in the background
- If the replication session is terminated, the target contains all the original data from the source at the PIT of activation, so the target can be used for restore and recovery
- In CoFA mode, the target holds only the data that was accessed before termination, so it cannot be used for restore and recovery
- Most vendor implementations can track changes made to the source or target, enabling incremental resynchronization
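Copy on First Access can be sketched much like the snapshot earlier, except that the trigger is any first access to a block after session start (a write to the source, or a read/write on the target). Illustrative model only:

```python
class CofaSession:
    """Deferred (CoFA) replication session over list-of-blocks devices."""

    def __init__(self, source):
        self.source = source
        self.target = [None] * len(source)
        self.copied = [False] * len(source)

    def _fault(self, i):
        """Copy the PIT data for block i on its first access."""
        if not self.copied[i]:
            self.target[i] = self.source[i]
            self.copied[i] = True

    def write_source(self, i, data):
        self._fault(i)            # preserve the PIT block on the target
        self.source[i] = data

    def read_target(self, i):
        self._fault(i)            # first read pulls the block over
        return self.target[i]

src = ["a", "b", "c"]
sess = CofaSession(src)
sess.write_source(0, "A")         # PIT value "a" is copied over first
```

Blocks never touched stay uncopied, which is why a terminated CoFA session leaves an incomplete target; Full Copy mode would instead run `_fault` over every block in the background.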

- Targets do not hold actual data; they hold pointers to where the data is located, so a target requires only a small fraction of the size of the source volume
- A replication session is set up between the source and target devices
  - Target devices are accessible immediately when the session is started
  - At the start of the session, the target device holds pointers to data on the source device
- Typically recommended when the changes to the source are less than about 30%

(Diagram: a virtual target device holding pointers to the source and to a save location.)

- Changes can occur on the source and target devices after the PIT has been created; how, and at what granularity, should they be tracked?
- Tracking changes bit by bit is too expensive: it would require an equivalent amount of storage just for the tracking
- Instead, the vendor chooses some level of granularity and creates a bitmap (one for the source and one for the target)
  - For example, with 32 KB granularity, a change to any bit within a 32 KB chunk flags the whole chunk as changed in the bitmap
  - For a 1 GB device, the map takes only 32768 / 8 / 1024 = 4 KB of space

(Diagram: at the PIT, both bitmaps are all zeros; after the PIT, changed chunks are flagged on each side, and a logical OR of the source and target bitmaps identifies the chunks to copy for resynchronization or restore.)
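The granularity arithmetic and the resynchronization OR can both be checked in a few lines (a sketch using the slide's 32 KB example granularity):

```python
def bitmap_bytes(device_bytes, chunk_bytes=32 * 1024):
    """Bitmap size for change tracking: one bit per chunk."""
    return device_bytes // chunk_bytes // 8   # 8 chunks tracked per byte

def chunks_to_copy(source_bits, target_bits):
    """A chunk must be copied on resync/restore if it changed on either
    side after the PIT: the logical OR of the two bitmaps."""
    return [s | t for s, t in zip(source_bits, target_bits)]

# 1 GB device at 32 KB granularity -> a 4 KB bitmap, as on the slide
size = bitmap_bytes(1 << 30)
```

The coarser the chunk, the smaller the map but the more unchanged data gets recopied; 32 KB is just the example granularity used above, not a universal choice.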

- When the source fails (logical corruption, physical failure of the source devices, or failure of the production server), there are two solutions:
  - Restore data from the target to the source: the restore is typically incremental, and applications can be restarted even before synchronization is complete
  - OR start production on the target: resolve issues with the source while continuing operations on the target, and after resolution restore the latest data from the target to the source

- Before a restore:
  - Stop all access to the source and target devices
  - Identify the target to be used for the restore, based on RPO and data consistency
  - Perform the restore
- Before starting production on the target:
  - Stop all access to the source and target devices
  - Identify the target to be used for restart, based on RPO and data consistency
  - Create a "Gold" copy of the target, as a precaution against further failures
  - Start production on the target

- Pointer-based full-volume replicas: restores can be performed either to the original source device or to any other device of like size
  - Restores to the original source can be incremental
  - A restore to a new device involves a full synchronization
- Pointer-based virtual replicas: restores can be performed to the original source, or to any other device of like size, only as long as the original source device is healthy
  - The target holds only pointers: pointers to the source for data not written after the PIT, and pointers to the save location for data written after the PIT
  - Thus, to restore to an alternate volume, the source must be healthy so that data not yet copied over to the target can still be read
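The last point can be made concrete: reconstructing the PIT image from a virtual replica needs the healthy source for every unchanged block. A sketch reusing the bitmap/save-location model from the earlier slides:

```python
def pit_block(i, bitmap, save_location, source):
    """Return the point-in-time contents of block i.
    Changed blocks come from the save location; unchanged blocks can
    only come from the source, which is why it must be healthy."""
    return save_location[i] if bitmap[i] else source[i]

# Block 1 was overwritten after the PIT; its old data sits in the save area
bitmap = [0, 1, 0]
save = {1: "b"}
source = ["a", "B", "c"]   # block 1 now holds post-PIT data
```

If the source device is lost, every block with a 0 bit is unrecoverable from the virtual replica alone, which is the restriction the slide describes.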

Comparison of array-based replication technologies:
- Performance impact on source: full-volume mirroring: none; pointer-based full-volume: some impact in CoFA mode, none in Full Copy mode; pointer-based virtual: high impact
- Size of target: full-volume mirroring and pointer-based full-volume: at least the same as the source; pointer-based virtual: a small fraction of the source
- Accessibility of source for restoration: full-volume mirroring: not required; pointer-based full-volume: required in CoFA mode, not required in Full Copy mode; pointer-based virtual: required
- Accessibility of target: full-volume mirroring: only after synchronization and detachment from the source; pointer-based full-volume and virtual: immediately accessible

(Diagram: multiple point-in-time target devices created from the source across a 24-hour timeline, e.g. at 06:00 A.M., 12:00 P.M., 06:00 P.M., and 12:00 A.M.)

- Replication management software resides on the storage array
- Provides an interface for easy and reliable replication management
- Two types of interface:
  - CLI
  - GUI

Key points covered in this lesson:
- Replication technologies:
  - Host based: LVM-based mirroring, file system snapshot
  - Array based: full-volume mirroring, pointer-based full-volume copy, pointer-based virtual replica

Key points covered in this chapter:
- Definition and possible uses of local replicas
- Consistency considerations when replicating file systems and databases
- Host-based replication: LVM-based mirroring, file system snapshot
- Storage-array-based replication: full-volume mirroring, pointer-based full-volume and virtual replication; choice of technology

Additional task: research EMC replication products.

- Describe the uses of a local replica in various business operations.
- How can consistency be ensured when replicating a database?
- What are the differences between full-volume mirroring and pointer-based replicas?
- What is the key difference between Full Copy mode and deferred (CoFA) mode?
- What are the considerations when performing restore operations for each array replication technology?