Metro Mirror, Global Copy, and Global Mirror Quick Reference

This document covers:
- Dependent writes and consistency
- PPRC rules
- Metro Mirror
- Global Copy
- Global Mirror
- How each solution maintains consistency
- Planned and unplanned procedures for each solution

March 2007

© International Business Machines Corporation, 2007. IBM and FlashCopy are registered trademarks of International Business Machines Corporation in the U.S. and other countries. Other company, product, or service names may be trademarks or service marks of others. IBM reserves the right to change specifications or other product information without notice. This publication may include typographic errors and technical inaccuracies. The content is provided as is, without express or implied warranties of any kind, including the implied warranties of merchantability or fitness for a particular purpose.

Background Information

Dependent write: a write operation whose start depends on the completion of a previous write to a volume in either the same disk system or a different disk system. Dependent writes are the basis for providing consistent data.

Consistency: the order of dependent writes is maintained. With consistent data at the remote site, it is possible to do a database restart, which is much faster than database recovery.

Asynchronous processing: data transmission is separated from the signaling of I/O complete. The order of dependent writes is not maintained, so the data is inconsistent and manual intervention is required to create consistent data. Distance between the local and remote sites has little impact on the response time of the primary volume.

Synchronous processing: I/O complete is not returned until the update has been transmitted to the remote site and acknowledged by the remote.

Recovery Time Objective (RTO): the elapsed time after an outage before the application is back up and running (server, network, workload, and data all available again).

Recovery Point Objective (RPO): the amount of data lost in the event of a disaster (transactions that had completed but were lost because they had not yet been reflected at the recovery site).

PPRC Rules
- PPRC secondary volumes must be offline to all attached systems
- One-to-one PPRC volume relationship
- Only IBM (DS8000/DS6000/ESS)-to-IBM supported
- PPRC ports are defined as SCSI-FCP
- Logical paths must be established between LSSs
- FCP links are bidirectional and can be shared with server data transfer
- Up to 8 links per LSS; more than 8 links per physical subsystem is allowed
- Up to 4 secondary LSSs can be connected to a primary LSS
- A secondary LSS can be connected to as many primary LSSs as links are available
- Distance: FCP links up to 300 km without RPQ

Metro Mirror

Metro Mirror uses synchronous processing to mirror data. Sites using Metro Mirror must conform to the following conditions:
- Can accept some performance impact to application write I/O
- RPO = 0 is possible using automation
- Distance is within maximum limits: 300 km on FCP links without RPQ

Metro Mirror provides consistency by specifying the consistency group option on the establish path command. If an error occurs between the local and remote sites, the pair suspends and messages or SNMP alerts are issued. Automation catches the alert and issues FREEZE/RUN commands to the LSSs that have dependent data. The FREEZE command also deletes the paths from the LSS. Writes to these volumes receive extended long busy (CKD) or I/O queue full (Open), and dependent writes are queued. Once all of the LSSs have been frozen, you can issue RUN commands to them to once again allow writes; otherwise the condition ends on its own after 2 minutes. This solution requires automation when consistency is desired across multiple disk systems, and automation is highly recommended even in one-to-one environments. (A sketch of this freeze/run sequence follows the planned-outage steps below.)

Metro Mirror planned outage for testing:
1. FREEZE/RUN to the LSSs to suspend the pairs (also deletes paths)
2. Failover to the B volumes
3. FlashCopy B to C volumes
4. Establish paths from local to remote
5. Failback A to B volumes
6. Test on the C volumes
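The freeze/run sequence described above is normally driven by automation (for example, GDPS or Tivoli Storage Productivity Center for Replication). The following is a minimal Python sketch of the ordering only; the helper functions (freeze_lss_pair, run_lss_pair) and the LSS pair values are hypothetical placeholders standing in for whatever Copy Services interface the automation actually uses, not a real IBM API.

# Minimal sketch of Metro Mirror freeze/run automation. The helpers are
# hypothetical placeholders, not a real IBM API; only the ordering matters:
# every LSS pair holding dependent data is frozen before any of them is
# allowed to accept writes again.

# (primary LSS, secondary LSS) pairs that contain dependent data -- assumed values.
LSS_PAIRS = [("10", "20"), ("11", "21"), ("12", "22")]


def freeze_lss_pair(pair):
    """Hypothetical wrapper: FREEZE suspends the PPRC pairs in this LSS pair
    and deletes the paths; writes get extended long busy / I/O queue full."""
    print(f"FREEZE issued to LSS pair {pair}")


def run_lss_pair(pair):
    """Hypothetical wrapper: RUN (thaw) lets application writes proceed again;
    the pairs stay suspended until they are resynchronized."""
    print(f"RUN issued to LSS pair {pair}")


def handle_mirroring_alert():
    """React to a suspend message / SNMP alert from any mirrored LSS."""
    # Phase 1: freeze everything first, so no dependent write reaches only
    # some of the secondaries while others are still mirroring.
    for pair in LSS_PAIRS:
        freeze_lss_pair(pair)

    # Phase 2: only after every freeze has completed, release the writes.
    # Without a RUN, the extended long busy condition ends on its own after
    # roughly 2 minutes.
    for pair in LSS_PAIRS:
        run_lss_pair(pair)


if __name__ == "__main__":
    handle_mirroring_alert()

The same order, freeze everything first and only then allow writes, is what the planned and unplanned procedures in this reference rely on for consistency across multiple disk systems.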

Metro Mirror unplanned outage:
1. Automation detects the problem and issues FREEZE/RUN commands to the LSSs; the pairs suspend (paths are also deleted)
2. Failover to the B volumes
3. Recovery is started at the remote site

When the local site returns:
1. Establish paths from remote to local
2. Failback B to A volumes
3. Shut down the recovery workload
4. FREEZE the LSSs to suspend the pairs
5. Failover to the local (A) volumes
6. Establish paths from local to remote
7. Failback A to B volumes
8. Start production

Global Copy

Global Copy uses asynchronous processing to transmit data. It is designed for sites that conform to the following conditions:
- RPO of many hours or days
- Distances that may be greater than 300 km

Global Copy provides consistency by:
1. Quiescing the application at the primary
2. Allowing the data to drain, or using go-to-sync
3. FREEZE/RUN the pairs (the consistency group option is NOT specified on the establish path command, so extended long busy or I/O queue full is not returned to the server)
4. FlashCopy the B volumes to C (C can be used for disaster recovery)
5. Reestablish the paths
6. Resynchronize the pairs

The planned outage procedure is the same as forming a consistency group. Global Copy is also an excellent tool for data migration.

Global Mirror

Global Mirror uses asynchronous processing to mirror data. It is designed for sites that conform to the following conditions:
- RPO > 0 but still very current
- Distances greater than 300 km
- A total of 8 disk systems in any combination of primary and secondary subsystems is supported; an RPQ can be requested to add more systems to the session

Global Mirror provides consistency by:
1. The master session signals "form consistency group"
2. Coordination time: Change Recording (CR) bitmaps are created, and write I/Os are held during this time to maintain the order of dependent writes
3. Once all CR bitmaps are created, writes are once again responded to
4. The Out of Sync (OOS) bitmap is drained to the remote site
5. FlashCopy the secondary (B) volume to the C volume

Global Mirror planned outage for testing:
1. Pause the Global Mirror session
2. FREEZE/RUN to the LSSs to suspend the pairs
3. Failover to the B volumes
4. Fast Reverse Restore (FRR) FlashCopy C > B
5. Wait for the background copy to complete
6. FlashCopy B > C with the same options used during Global Mirror setup
7. FlashCopy B > D
8. Reestablish paths from local to remote
9. Failback A > B
10. Resume the session
11. Test on the D volumes

Global Mirror unplanned outage:
1. Terminate the Global Mirror session if it is still active
2. Issue Failover to the B volumes
3. Check the FlashCopy status; revert or commit
4. Fast Reverse Restore C > B
5. Wait for the background copy to complete
6. FlashCopy B > C with the same options used during the initial Global Mirror setup
7. Recovery can now be started on the B volumes

When the local site returns:
1. Establish paths from remote to local
2. Failback to the B volumes with the XD (gcp) option
3. Once Global Copy is in steady state, quiesce host activity
4. Ensure that OOS = 0 after quiescing host activity
5. Failover to the A volumes with the XD (gcp) option
6. Establish paths from local to remote
7. Failback to the A volumes with the XD (gcp) option
8. Reestablish any Master-to-Subordinate links that may have been lost
9. Start the Global Mirror session
10. Resume production activity to the A volumes

[Diagram: Global Copy over long distance with FlashCopy: PPRC primary 'A', channel extenders, PPRC secondary and FlashCopy source 'B', FlashCopy target 'C']

Summary

              Global Mirror for zSeries (XRC)  Metro Mirror  Global Copy  Global Mirror
RPO           > 0 (3-5 sec)                    0             > 0 (hours)  > 0 (3-5 sec)
Distance      No limit                         300 km (FCP)  No limit     No limit
Asynchronous  Yes                              No            Yes          Yes
Volume types  CKD                              CKD & FB      CKD & FB     CKD & FB
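The RPO column in the summary reflects how often each solution hardens a consistent copy at the remote site. As a rough illustration only, with assumed example numbers rather than an IBM sizing formula, the sketch below adds up the Global Mirror phases described earlier (consistency group interval, coordination time, drain of the out-of-sync bitmap, and the FlashCopy to the C volumes) to show why a well-sized link gives the few-second figure in the table while a long interval or a congested link stretches it.

# Rough illustration of what drives Global Mirror RPO. This is NOT an IBM
# sizing formula; the phase names follow the quick reference and the numbers
# below are assumptions for the example only.

def estimate_worst_case_rpo(cg_interval_s, coordination_s, drain_s, flashcopy_s):
    """Worst case, a disaster hits just before the next consistency group is
    hardened at the remote site, so the exposure is roughly one full
    formation cycle."""
    return cg_interval_s + coordination_s + drain_s + flashcopy_s


if __name__ == "__main__":
    # Continuous consistency-group formation over a well-sized link: the
    # exposure stays in the few-second range quoted in the summary table.
    print(estimate_worst_case_rpo(cg_interval_s=0.0, coordination_s=0.003,
                                  drain_s=3.0, flashcopy_s=0.5))

    # A long consistency-group interval or a congested link stretches the
    # drain time, and the RPO grows with it.
    print(estimate_worst_case_rpo(cg_interval_s=60.0, coordination_s=0.003,
                                  drain_s=25.0, flashcopy_s=1.0))

Metro Mirror, by contrast, does not return I/O complete until the remote site has acknowledged the update, which is why its RPO is 0 and why distance feeds directly into application write response time.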