IBM Systems & Technology Group © 2004 IBM Corporation Global Copy, Metro Mirror, Global Mirror, and Metro/Global Mirror Overview Charlie Burger Storage Systems Advanced Technical Support

Advanced Technical Support, Americas © 2005 IBM Corporation 2 Table of Contents  Background –Consistency and dependent writes  Peer-to-Peer Remote Copy - PPRC  PPRC Considerations  Establish Path Considerations  FCP vs ESCON Links  Metro Mirror  Global Copy  Global Mirror  Metro/Global Mirror  Failover  Failback  Addendum

Advanced Technical Support, Americas © 2005 IBM Corporation 3 Background  Dependent writes –The start of one write operation is dependent upon the completion of a previous write to a disk in either the same subsystem frame or a different subsystem frame –Basis for providing consistent data for copy operations  Consistency –Preserves the order of dependent writes; for databases, consistent data provides the capability to perform a database restart rather than a database recovery –Restart can be measured in minutes, while recovery could take hours or even days  Asynchronous processing –The separation of data transmission from the signaling of I/O complete; distance between primary and secondary has little impact upon the response time of the primary volume, which helps minimize impact to application performance

Advanced Technical Support, Americas © 2005 IBM Corporation 4 Database RESTART or RECOVER? Consistency provides:  "DB Restart" - to start a database application following an outage without having to restore the database; this is a process measured in minutes  and avoids "DB Recover" - restoring the last set of database image copy tapes and applying log changes to bring the database up to the point of failure; this is a process measured in hours or even days  [Diagram: business continuity stack - operations, applications, and network staff, management control, physical facilities, telecom network, data, operating system, and applications - spanning S/390, AS/400, RS/6000, and other UNIX/NT servers]

Advanced Technical Support, Americas © 2005 IBM Corporation 5 Dependent Writes  Many examples exist where the start of one write operation is time dependent on the completion of a previous write on a different disk group or even a different disk frame  Database and log, for example  Synchronous copy ensures data integrity  [Diagram: a database application issues 1. Log update, 2. Update database, 3. Mark log complete against primary volumes (B, L, X), which are mirrored to secondary volumes (C, M, Y)]

Advanced Technical Support, Americas © 2005 IBM Corporation 6 Dependent Write Consistency (1)  Scenario: 'Update database' does not get propagated to the 2nd site  Synchronous copy and dependent writes mean 'Mark Log Complete' will never be issued by the application  Result: the database is consistent  [Diagram: only step 1, Log update, reaches the secondary volumes before the mirroring failure]

Advanced Technical Support, Americas © 2005 IBM Corporation 7 Dependent Write Consistency (2)  Scenario: 'Mark Log Complete' does not get propagated to the 2nd site  Result: the secondary site logs say the update was not completed, so a backout of valid data will be done upon restart at the secondary site; the database is still consistent  [Diagram: steps 1, Log update, and 2, Update database, reach the secondary volumes; step 3 does not]

Advanced Technical Support, Americas © 2005 IBM Corporation 8 Consistency  To achieve consistency at a remote mirror location you must maintain the order of dependent writes –You cannot have a write to one volume mirrored and then have a dependent write to another volume not mirrored  The remote mirror functions each maintain consistency in their own way –Metro Mirror uses ELB (Extended Long Busy) for CKD volumes and I/O Queue Full for FB LUNs –Global Mirror holds write I/Os while building an alternate bitmap before draining the OOS (out of sync) bitmap when creating a consistency group –z/OS Global Mirror (XRC) uses timestamps to create consistency groups –Global Copy requires procedures to create consistency

Advanced Technical Support, Americas © 2005 IBM Corporation 9 Peer-to-Peer Remote Copy - PPRC  Metro Mirror – Synchronous PPRC –Synchronous mirroring with consistency at the remote site RPO of 0  Global Copy – PPRC Extended Distance (XD) –Asynchronous mirroring without consistency at the remote site Consistency manually created by user –RPO determined by how often user is willing to create consistent data at the remote  Global Mirror –Asynchronous mirroring with consistency at the remote site RPO can be somewhere between 3-5 seconds  Metro/Global Mirror –Three site mirroring solution using Metro Mirror between site 1 and site 2 and Global Mirror between site 2 and site 3 Consistency maintained at sites 2 and 3 –RPO at site 2 near 0 –RPO at site 3 near 0 if site 1 is lost –RPO at site 3 somewhere between 3-5 seconds if site 2 is lost

Advanced Technical Support, Americas © 2005 IBM Corporation 10 PPRC Considerations  PPRC secondary volume must be off-line to all attached systems  One-to-one PPRC volume relationship  Only IBM (DS8000/DS6000/ESS) to IBM is supported  Logical paths have to be established between Logical Subsystems  FCP links are bidirectional and can also be used for server data transfer  Up to 8 links per LSS; more than 8 links per physical subsystem is allowed  Up to 4 secondary LSSs can be connected to a primary LSS; a secondary LSS can be connected to as many primary LSSs as links are available  Distance: 103 KM for ESCON links and 300 KM for FCP links

Advanced Technical Support, Americas © 2005 IBM Corporation 11 Establish Path Considerations  If paths have already been established, issuing another establish path command will overlay the existing established paths  For example: –2 paths are established using this command: mkpprcpath -dev IBM FA120 -remotedev IBM FA150 -srclss 01 -tgtlss 01 –remotewwnn A000F I1A10:I2A20 I1A11:I2A21 –I wish to add another path, so this command is issued: mkpprcpath -dev IBM FA120 -remotedev IBM FA150 -srclss 01 -tgtlss 01 –remotewwnn A000F I0100:I0100 –The result is the loss of the 2 previously established paths, leaving only the new path established –To add the path, the following should be issued instead: mkpprcpath -dev IBM FA120 -remotedev IBM FA150 -srclss 01 -tgtlss 01 –remotewwnn A000F I0100:I0100 I1A10:I2A20 I1A11:I2A21
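A minimal DS CLI sketch of the safer pattern, using the lspprcpath and mkpprcpath syntax from the addendum; the storage image IDs, WWNN, and port pairs below are hypothetical placeholders, not real identifiers:

lspprcpath -dev IBM.2107-PRIMARY1 01
mkpprcpath -dev IBM.2107-PRIMARY1 -remotedev IBM.2107-REMOTE01 -srclss 01 -tgtlss 01 -remotewwnn 5005076300ABCDEF I0100:I0100 I1A10:I2A20 I1A11:I2A21

List the paths that already exist for the source LSS, then reissue mkpprcpath naming the complete set of port pairs that should remain established, since the new list replaces whatever was there before.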

Advanced Technical Support, Americas © 2005 IBM Corporation 12 FCP vs ESCON  ESCON links run at 17 MB/sec, with a lower sustained rate  Fibre Channel links run many times faster, depending upon the adapter, again with a lower sustained rate  DS8000 and DS6000 only support PPRC FCP links  ESS 800 supports both PPRC ESCON and FCP links

Advanced Technical Support, Americas © 2005 IBM Corporation 13 What is Metro Mirror?  Disaster protection for all IBM supported platforms  Other potential uses: – Data migration/movement between devices – Data workload migration to alternate site  Hardware and LIC solution  Synchronous copy, mirroring (RAID 1) to another DS8000/DS6000/ESS  Application independent  Some performance impact on application I/Os  Established at a disk level  A 2 site solution

Advanced Technical Support, Americas © 2005 IBM Corporation 14 Profile of a PPRC Synchronous Write  Synchronous write sequence: 1. Write from the host (channel end) 2. Write to secondary 3. Write acknowledged by secondary 4. Acknowledgement to the host (device end - I/O complete)

Advanced Technical Support, Americas © 2005 IBM Corporation 15 Maintaining Consistency – Metro Mirror  Consistency group on the establish path commands –Loss of communication between primary and secondary sites will cause an extended long busy (ELB) for zSeries and I/O queue full for open systems to be returned to any write issued to a volume in the LSS that lost communication  Automation can issue FREEZE commands to all of the LSSs that have dependent data with the LSS that lost communication –Returning ELB or I/O queue full causes the next dependent write to NOT be issued maintaining the order of the dependent writes –After all of the FREEZE commands have been issued, RUN commands can be issued to the LSSs to resume, otherwise the ELB or I/O queue full will be returned for a default of 2 minutes  Automation is required when dependent data spans across multiple physical disk subsystems and is HIGHLY recommended when all primaries are within a single subsystem
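A minimal sketch of what such freeze/run automation issues when mirroring is interrupted, using the freezepprc and unfreezepprc syntax from the addendum; the storage image IDs and LSS pairs below are hypothetical placeholders, and a real implementation must cover every LSS pair that holds dependent data:

freezepprc -dev IBM.2107-PRIMARY1 -remotedev IBM.2107-REMOTE01 10:10 11:11
freezepprc -dev IBM.2107-PRIMARY2 -remotedev IBM.2107-REMOTE02 20:20
unfreezepprc -dev IBM.2107-PRIMARY1 -remotedev IBM.2107-REMOTE01 10:10 11:11
unfreezepprc -dev IBM.2107-PRIMARY2 -remotedev IBM.2107-REMOTE02 20:20

Every affected LSS pair is frozen first (suspending the pairs while writes are held under ELB or I/O queue full); only after all freezes complete are the run (unfreeze) commands issued to release the held writes.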

Advanced Technical Support, Americas © 2005 IBM Corporation 16 Consistency – Metro Mirror, One Physical Subsystem  No automation: if all paths are lost, no updates are transmitted to the secondaries and consistency is maintained

Advanced Technical Support, Americas © 2005 IBM Corporation 17 Consistency – Metro Mirror, One Physical Subsystem  No automation: if one pair suspends while the others continue to mirror, consistency is lost

Advanced Technical Support, Americas © 2005 IBM Corporation 18 Consistency – Metro Mirror, Multiple Physical Subsystems  No automation: the order of dependent writes is not maintained and consistency is lost

Advanced Technical Support, Americas © 2005 IBM Corporation 19 Consistency – Metro Mirror, Multiple Physical Subsystems  With automation: automation ensures that the order of dependent writes is maintained

Advanced Technical Support, Americas © 2005 IBM Corporation 20 When to use Metro Mirror  Recovery system is required to be current with the primary application system  Can accept some performance impact to application write I/O operations at the primary location  Recovery is on a disk-by-disk basis  Distance is within maximum limits: 103 KM for ESCON links and 300 KM for FCP links (RPQ for greater distances)

Advanced Technical Support, Americas © 2005 IBM Corporation 21 Metro Mirror Sequential Write Data Rate – Turbo R2

Advanced Technical Support, Americas © 2005 IBM Corporation 22 Pre-Turbo DS8000 Metro Mirror 4 KB Write Service Time Comparisons

Advanced Technical Support, Americas © 2005 IBM Corporation 23 Pre-Turbo DS8100 Metro Mirror Sequential Write Throughput

Advanced Technical Support, Americas © 2005 IBM Corporation 24 Pre-Turbo DS8100 Metro Mirror Sequential Write Throughput

Advanced Technical Support, Americas © 2005 IBM Corporation 25 What is Global Copy?  Global Copy uses an additional PPRC mode designed for high performance data copy at long distances –TSO OPTION(XD) Extended Distance, or DS CLI –type gcp –Disk level option  Asynchronous transfer of application primary writes to the secondary allows mirroring over long distances with minimal impact to host performance –Writes to the primary disk receive immediate completion status while in XD mode  Writes can be out of sequence on the secondary disk –Develop procedures to create a point in time consistency  A 2 site solution

Advanced Technical Support, Americas © 2005 IBM Corporation 26 Profile of an Asynchronous Write  Asynchronous write sequence: 1. Write from the host (channel end) 2. Write acknowledgement to the host (channel end / device end) 3. Write to secondary 4. Write acknowledged by secondary  Contrast with the synchronous write sequence, where the write to the secondary completes before the host receives I/O complete

Advanced Technical Support, Americas © 2005 IBM Corporation 27 Global Copy – How it works (1)  Synchronous PPRC establish (initial copy) is done in two phases: –Phase I - Copy all tracks in the volume starting at zero and going to the end of the volume. Use a bitmap to keep track of which tracks need to be copied. Do not transfer any host updates - just set the bit in the bitmap for new host updates. –Phase II - Go back through the bitmap to copy any host updates received while in Phase I. Any host updates received during this phase, and for the remainder of the PPRC pair life, will be sent synchronously to the remote volume.  Extended Distance PPRC –Stay in Phase I forever –No impact to host write response time –Copy at remote site is "fuzzy" - updates are not sent in order or in time consistent groups

Advanced Technical Support, Americas © 2005 IBM Corporation 28 Global Copy – How it works (2)  Establish PPRC pairs with Extended Distance option –Writes to primary receive immediate completion status  Primary records updated tracks in a bitmap  Incremental copy of changed tracks or records periodically sent to secondary  To create a point in time consistency: –Transition to PPRC synchronous until full duplex state is reached Usually a matter of seconds –Alternatively, quiesce of I/O and flushing of buffers on primary host will result in consistent secondary disk
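A minimal TSO sketch of that go-to-sync approach, built from the transitions listed on the Volume State Transitioning slide and the CESTPAIR syntax in the addendum; the DEVN, PRIM, and SEC values are generic placeholders, and the CSUSPEND parameter form is assumed to match CESTPAIR:

CESTPAIR DEVN(device_number) PRIM(ssid serialno cca lss) SEC(ssid serialno cca lss) OPTION(SYNC)
(wait for full duplex - the secondary is now consistent and can be captured, for example with FlashCopy)
CSUSPEND DEVN(device_number) PRIM(ssid serialno cca lss) SEC(ssid serialno cca lss)
CESTPAIR DEVN(device_number) PRIM(ssid serialno cca lss) SEC(ssid serialno cca lss) MODE(RESYNC) OPTION(XD)

The pair runs synchronously only long enough to reach full duplex, the consistent secondary is captured, and the pair is then suspended and resynchronized back into Extended Distance mode.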

Advanced Technical Support, Americas © 2005 IBM Corporation 29 Global Copy – How it works (4)  Agents process a volume using the Out-of-Sync (OOS) bitmap to determine which tracks to transmit  Not all volumes are processed at the same time  As a volume is processed, tracks updated behind the active track being transmitted are recorded in the OOS bitmap and will be processed on the next pass

Advanced Technical Support, Americas © 2005 IBM Corporation 30 PPRC State Changes  Transition to simplex means PPRC is withdrawn  XD is established at the volume/LUN level

Advanced Technical Support, Americas © 2005 IBM Corporation 31 Volume State Transitioning

  To transition from...   To...      Use the following command...
  SYNC                    SIMPLEX    CDELPAIR
  SIMPLEX                 SYNC       CESTPAIR OPTION(SYNC)
  SYNC                    SUSP       CSUSPEND
  SUSP                    SYNC       CESTPAIR MODE(RESYNC)
  XD                      SUSP       CSUSPEND
  SUSP                    XD         CESTPAIR MODE(RESYNC) OPTION(XD)
  XD                      SIMPLEX    CDELPAIR
  SIMPLEX                 XD         CESTPAIR OPTION(XD)
  SUSP                    SIMPLEX    CDELPAIR
  XD                      SYNC       CESTPAIR OPTION(SYNC)

Advanced Technical Support, Americas © 2005 IBM Corporation 32 Maintaining Consistency – Global Copy  Consistency group is NOT specified on the establish path command –Data on Global Copy secondaries is not consistent, so there is no need to maintain the order of dependent writes  Consistent data is created by the user: –Quiesce I/O –Suspend the pairs (FREEZE can be used; ELB will not be returned to the server since consistency group was NOT specified on the establish path) –FlashCopy secondary to tertiary; the tertiary will have consistent data –Reestablish paths (if necessary) –RESYNC (resumepprc) Global Copy
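A minimal DS CLI sketch of that procedure, assuming hypothetical storage image IDs, LSS 10, Global Copy pair 1000:1000, and tertiary volume 1100; quiescing I/O happens on the host and is not shown, and mkflash (the DS CLI FlashCopy establish command, not shown elsewhere in this deck) and its -nocp option are assumptions:

freezepprc -dev IBM.2107-PRIMARY1 -remotedev IBM.2107-REMOTE01 10:10
mkflash -dev IBM.2107-REMOTE01 -nocp 1000:1100
mkpprcpath -dev IBM.2107-PRIMARY1 -remotedev IBM.2107-REMOTE01 -srclss 10 -tgtlss 10 -remotewwnn 5005076300ABCDEF I0100:I0100
resumepprc -dev IBM.2107-PRIMARY1 -remotedev IBM.2107-REMOTE01 -type gcp 1000:1000

The freeze suspends the Global Copy pairs without surfacing ELB, the FlashCopy preserves the consistent image on the tertiary volume, the paths are reestablished if the freeze removed them, and resumepprc restarts Global Copy.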

Advanced Technical Support, Americas © 2005 IBM Corporation 33 When to use Global Copy  Recovery system does not need to be current with the primary application system –RPO in the range of hours or days –User creates consistent copy of recovery data  Minor impact to application write I/O operations at the primary location  Recovery uses copies of data created by the user on tertiary volumes  Distance beyond ESCON or FCP limits –103 KM for ESCON links and 300 KM for FCP links (RPQ for greater distances)  A great tool for migrating data

Advanced Technical Support, Americas © 2005 IBM Corporation 34 What is Global Mirror?  A Disaster Recovery (DR) data replication solution –Reduced (less than peak bandwidth) network bandwidth requirements (duplicate writes not sent)  A 2 site solution  Asynchronous data transfer –No impact to the production write I/Os  Peer-to-peer (no outside-the-box server MIPS) –Microcode controlled –Peer-to-peer data copy mechanism is Global Copy –Consistency Group formation mechanism is FlashCopy  3 copies (A → B → C) –Or 4 copies if a test/practice copy (D copy) is kept and DR is to be continued  Unlimited distance  Very little data loss (Recovery Point Objective (RPO)) –Single digit seconds (goal was/is 3-5 seconds)  Scalable –Up to 8 primary and secondary physical subsystems (more with an RPQ)

Advanced Technical Support, Americas © 2005 IBM Corporation 35 Global Mirror: Basic concept  Concept  Asynchronous long distance copy (Global Copy), i.e., little to no impact to application writes  Momentarily pause application writes (a fraction of a millisecond to a few milliseconds)  Create a point in time consistency group across all primary subsystems (in the OOS bitmap)  New updates are saved in the Change Recording bitmap  Restart application writes and complete the write (drain) of point in time consistent data to the remote site  Stop the drain of data from the primary (after all consistent data has been copied to the secondary)  Logically FlashCopy all data (i.e., the secondary is consistent; now make the tertiary look like the secondary)  Restart Global Copy writes from the primary  Automatic repeat of the sequence every few seconds to minutes to hours (selectable, and can be immediate)  Intended benefit  Long distance, no application impact (adjusts to peak workloads automatically), small RPO, remote copy solution for zSeries and Open Systems data, and consistency across multiple subsystems  [Diagram: host I/O to the primary at the local site, Global Copy (PPRC-XD) over long distance to the secondary at the remote site (could require channel extenders; FCP links only), and FlashCopy (record, nocopy, persistent, inhibit target write) from the secondary to the tertiary]

Advanced Technical Support, Americas © 2005 IBM Corporation 36 Consistency Group Formation  Coordination Time: coordinate the local units; record new writes in bitmaps but do not copy them to the remote site  Drain Time: let the consistency group data drain to the remote site  FlashCopy relationships are then established; once all FlashCopy relationships are established, the cycle waits the CG Interval Time and repeats  Global Copy continually cycles through the volume bitmaps, copying changed data to the remote mirror volumes

Advanced Technical Support, Americas © 2005 IBM Corporation 37 Tuneables (input parameters)  Maximum Coordination Time –Maximum allowed pause of production write updates for the Consistency Group coordination action, i.e., when the Master coordinates the formation of the Consistency Group with all Subordinates –When coordination is completed, writes are allowed to continue –Default = 50 milliseconds {Range: 0 to 65535 ms (65+ seconds)} –If the coordination time is exceeded, coordination is stopped and all writes are allowed to continue –Design point is 2-3 ms  Maximum Drain Time –Maximum CG drain time in seconds before failing (terminating) the current drain activity –Default = 30 seconds {Range: 0 to 65535 seconds (just over 18 hours)} –After 5 failures, drain time is infinite, i.e., until a consistency group is formed (completely drained)  Consistency Group Interval Time –Time to wait before starting the next consistency group formation process –Default = 0 seconds {Range: 0 to 65535 seconds (just over 18 hours)}

Advanced Technical Support, Americas © 2005 IBM Corporation 38 Typical Global Mirror Configuration  One Master, multiple Subordinates –Multiple primary to multiple secondary subsystems –Consistency across all primary subsystems  The Master communicates with the Subordinates to form consistency groups; the Master performs the same operations on the volumes in the consistency group in its own box when it directs the Subordinates to perform operations  When forming a Consistency Group, PPRC-XD continues transmitting / draining the consistent data to the secondary site; once the consistency group is formed, the new update data is transmitted as in a PPRC-XD environment without Asynchronous PPRC  Once the A volumes have been drained to the B volumes, the B volumes are FlashCopied to the C volumes  [Diagram: host I/O at the local site to the Master and Subordinate subsystems, mirroring to the remote site]

Advanced Technical Support, Americas © 2005 IBM Corporation 39 Global Mirror Initialization Process  1. Establish Global Copy paths  2. Establish Global Copy pairs  3. Establish FlashCopy pairs (wait until the Global Copy pairs have completed the 1st pass copy before establishing the FlashCopy pairs)  4. Define the Global Mirror session and add volumes to the session  5. Establish control paths between the Master and the Subordinates (these paths could be created earlier)  6. Start Global Mirror with the Start command sent to the Master
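A minimal DS CLI sketch of those steps for a single-box (Master only) configuration, with hypothetical storage image IDs, LSS 10, session 01, A/B volumes 1000, and C volume 1100; mkflash, mksession, chsession, and mkgmir are the DS CLI FlashCopy, session, and Global Mirror commands, and their exact options as shown here should be treated as assumptions:

mkpprcpath -dev IBM.2107-PRIMARY1 -remotedev IBM.2107-REMOTE01 -srclss 10 -tgtlss 10 -remotewwnn 5005076300ABCDEF I0100:I0100
mkpprc -dev IBM.2107-PRIMARY1 -remotedev IBM.2107-REMOTE01 -type gcp -mode full 1000:1000
mkflash -dev IBM.2107-REMOTE01 -record -persist -nocp -tgtinhibit 1000:1100
mksession -dev IBM.2107-PRIMARY1 -lss 10 01
chsession -dev IBM.2107-PRIMARY1 -lss 10 -action add -volume 1000 01
mkgmir -dev IBM.2107-PRIMARY1 -lss 10 -session 01

The FlashCopy pairs (step 3) are established only after the Global Copy first pass completes, and the control paths of step 5 are omitted because this sketch has no Subordinates; the coordination, drain, and interval tuneables from the previous slides would be supplied on the Global Mirror start command.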

Advanced Technical Support, Americas © 2005 IBM Corporation 40 Maintaining Consistency – Global Mirror  Consistency group is NOT specified on the establish path command –Loss of communication will NOT cause ELB to be returned for writes –FREEZE command can be used to suspend pairs after Global Mirror session is paused but ELB will NOT be returned for writes to LSSs  Consistency is maintained by not returning CE/DE or I/O complete during the coordination phase when forming a Consistency Group –Not returning CE/DE or I/O complete causes the next dependent write to NOT be issued maintaining the order of the dependent writes

Advanced Technical Support, Americas © 2005 IBM Corporation 41 When to use Global Mirror?  RPO can be greater than 0 but still needs to be very current –In the single digit second range  Limited impact to application write I/O operations at the primary location –Asynchronous data transfer  Recovery is on a disk-by-disk basis  Distance exceeds maximum limits for synchronous data transfer –300 KM for fcp links Global Mirror only supports fcp links

Advanced Technical Support, Americas © 2005 IBM Corporation 42 Global Mirror at 1000 mi DS8300 vs ESS 800 (both w/ 128 x 10k RPM disk)

Advanced Technical Support, Americas © 2005 IBM Corporation 43 What is Metro/Global Mirror  A 3 site Disaster Recovery (DR) data replication solution –Metro Mirror from local (A) to intermediate (B) and Global Mirror from intermediate to remote (C) –The Metro Mirror secondary is cascaded to the remote site  4 copies of data (A → B → C → D) –C and D are the Global Mirror secondary and FlashCopy volumes –Or 5 copies if a test/practice copy is kept and DR is to be continued  Unlimited distance between the intermediate site and the remote site  RPO of 0 for an "A" site failure –Zero RPO implies automation to ensure no production updates if mirroring stops  Potential RPO of 3-5 seconds for an "A" and "B" twin site failure –Depends on workload and bandwidth between B and C

Advanced Technical Support, Americas © 2005 IBM Corporation 44 When to use Metro/Global Mirror  When two recovery sites are required

Advanced Technical Support, Americas © 2005 IBM Corporation 45 Remote Mirror Comparisons

              Global Mirror for zSeries (XRC)   Metro Mirror (PPRC)   Global Copy (PPRC-XD)   Global Mirror
  RPO         > 0 (3-5 sec)                     0                     > 0 (hours)             > 0 (3-5 sec)
  Distance    Unlimited                         300 KM (FCP)          Unlimited               Unlimited
  Data types  CKD only                          CKD & FB              CKD & FB                CKD & FB

Advanced Technical Support, Americas © 2005 IBM Corporation 46 Failover Processing (1)  The secondary volume to which the command was issued becomes a suspended PPRC primary  The targeted volume gets a Change Recording bitmap –Used to track changes that make it different from its partner  Establishes a new relationship between the volume the command was issued to and its PPRC primary volume  Valid for both Metro Mirror and Global Copy  [Diagram: before - a Global Copy or Metro Mirror pair from primary to secondary; after - the former secondary is a suspended primary with a CR bitmap, while the original primary is suspended or still active]

Advanced Technical Support, Americas © 2005 IBM Corporation 47 Failover Processing (2)  No communication occurs between the two volumes  Typically, failover is used when the relationship between the volumes is suspended –Consider a path failure – Primary goes suspended, Secondary does not know anything is wrong, is not suspended  If the relationship is NOT suspended when the command is issued: –The secondary volume WILL become a suspended primary –The primary volume will BECOME a suspended primary when host I/O is targeted to the volume – or a suspend command is issued to the primary volume –If neither I/O nor suspend occur, problems may arise during failback

Advanced Technical Support, Americas © 2005 IBM Corporation 48 Failback Processing (1)  The primary volume to which this command is issued has its PPRC partner converted, if necessary, to a PPRC secondary  A path (or paths) must exist between the pair  The volume to which the command was issued –Combines the partner bitmaps to get the total "difference" –Begins to resync to its partner, which becomes a PPRC secondary volume; data begins to transfer  [Example 1 diagram: failback to the original Primary - before, a suspended primary; during, Global Copy resync from primary to secondary; after, the Global Copy or Metro Mirror relationship is restored]

Advanced Technical Support, Americas © 2005 IBM Corporation 49 Failback Processing (2)  Similar to all PPRC establish operations, the resync begins processing in "Global Copy" mode (First Pass), keeping track of updates received during the resync (CR bitmap)  The pairs return to their original mode (Global Copy or Metro Mirror) at the conclusion of the resync operation  Failover and Failback are applicable to all PPRC relationships, not just Global Mirror, as we will see in later lectures and labs  [Example 2 diagram: failback to the original target - before, a suspended primary; during, Global Copy resync in the reverse direction; after, a Global Copy or Metro Mirror pair running toward the original target]

Advanced Technical Support, Americas © 2005 IBM Corporation 50 Failover and Failback Command Parameters  For both failover and failback, the Primary and Secondary parameters must reflect the "new direction" of the copy operation  Example: volume A (on the box with serial 85551) is mirrored to volume B (on the box with serial ABC2A) –To issue FAILOVER to B: CESTPAIR FO DEVN(B) PRI(ABC2A) SEC(85551), or failoverpprc –dev abc2a –remotedev ... b:a –To issue FAILBACK to A: CESTPAIR FB DEVN(A) PRI(85551) SEC(ABC2A), or failbackpprc –dev ... –remotedev abc2a a:b
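A minimal DS CLI sketch of a swap using the failoverpprc / failbackpprc forms above, with hypothetical storage image IDs and volume pair 1000:1000; -dev always names the box acting as the primary in the new direction, and the -type value (mmir for a Metro Mirror pair) is an assumption:

failoverpprc -dev IBM.2107-REMOTE01 -remotedev IBM.2107-PRIMARY1 -type mmir 1000:1000
failbackpprc -dev IBM.2107-REMOTE01 -remotedev IBM.2107-PRIMARY1 -type mmir 1000:1000

The failover makes the former secondary a suspended primary with a Change Recording bitmap; the failback, issued against the same box, resynchronizes the tracked changes to the partner, which becomes the secondary. Returning to the original direction repeats the same two commands issued against the original primary's box.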

Advanced Technical Support, Americas © 2005 IBM Corporation 51 Addendum  Management Tools  Establish Paths  Establish Pairs  FREEZE/RUN  Resynchronizing Pairs  Path and Pair Status  Line speeds  References

Advanced Technical Support, Americas © 2005 IBM Corporation 52 Management Tools

              Runs on z/OS   Runs on Open Server   Manages CKD   Manages FB
  TSO         Yes            No                    Yes           Yes (1)
  API         Yes            No                    Yes           Yes (1)
  ICKDSF      Yes            No                    Yes           No
  DS CLI      No             Yes                   Yes           Yes
  TPC for R   No             Yes                   Yes           Yes
  GDPS        Yes            No                    Yes           Yes (1)

Note: 1. A CKD unit address (and host UCB) must be defined in the same DS8000/DS6000 server against which host I/O may be issued to manage FB LUNs.

Advanced Technical Support, Americas © 2005 IBM Corporation 53 Establish Paths  DS CLI mkpprcpath –dev storage_image_id –remotedev storage_image_id –remotewwnn wwnn –srclss source_LSS_ID –tgtlss target_LSS_ID source_port_ID:target_port_ID  ICKDSF PPRCOPY ESTPATH UNIT(ccuu) FCPPATHS(X’aaaabbbb’) PRIMARY(ssid,ser#) SECONDARY(ssid,ser#) LSS(X’pp’,X’ss’) WWNN(pwwnn,swwnn)  TSO CESTPATH DEVN(device_number) PRIM(ssid wwnn lss) SEC(ssid wwnn lss) LINK(aaaabbbb)

Advanced Technical Support, Americas © 2005 IBM Corporation 54 Establish Pairs  DS CLI mkpprc –dev storage_image_ID –remotedev storage_image_ID –type gcp –mode full SourceVolumeId:TargetVolumeId  ICKDSF PPRCOPY ESTPAIR DDNAME(dname) PRIMARY(ssid,ser#,cca) SECONDARY(ssid,ser#,cca) LSS(X’pp’,X’ss’) MODE(COPY) OPTION(XD)  TSO CESTPAIR DEVN(device_number) PRIM(ssid serialno cca lss) SEC(ssid serialno cca lss) MODE(COPY) OPTION(XD)
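The establish pair examples above create Global Copy (XD) pairs; a Metro Mirror pair uses the same commands with the synchronous option instead. A minimal sketch with hypothetical image and volume IDs (-type mmir is assumed to be the DS CLI value for Metro Mirror, and the TSO form follows the Volume State Transitioning slide):

mkpprc -dev IBM.2107-PRIMARY1 -remotedev IBM.2107-REMOTE01 -type mmir -mode full 1000:1000
CESTPAIR DEVN(device_number) PRIM(ssid serialno cca lss) SEC(ssid serialno cca lss) MODE(COPY) OPTION(SYNC)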

Advanced Technical Support, Americas © 2005 IBM Corporation 55 FREEZE  DS CLI freezepprc –dev storage_image_ID –remotedev storage_image_ID Source_LSS_ID:Target_LSS_ID  ICKDSF PPRCOPY FREEZE DDNAME(dname) PRIMARY(ssid,ser#) SECONDARY(ssid,ser#) LSS(X’pp’,X’ss’)  TSO CGROUP DEVN(device_number) PRIM(ssid serialno lss) SEC(ssid serialno lss) FREEZE

Advanced Technical Support, Americas © 2005 IBM Corporation 56 RUN  DS CLI unfreezepprc –dev storage_image_ID –remotedev storage_image_ID Source_LSS_ID:Target_LSS_ID  ICKDSF PPRCOPY RUN DDNAME(dname) PRIMARY(ssid,ser#) SECONDARY(ssid,ser#) LSS(X’pp’,X’ss’)  TSO CGROUP DEVN(device_number) PRIM(ssid serialno lss) SEC(ssid serialno lss) RUN

Advanced Technical Support, Americas © 2005 IBM Corporation 57 Resynchronizing Pairs  DS CLI resumepprc –dev storage_image_ID –remotedev storage_image_ID –type gcp SourceVolumeId:TargetVolumeId  ICKDSF PPRCOPY ESTPAIR DDNAME(dname) PRIMARY(ssid,ser#,cca) SECONDARY(ssid,ser#,cca) LSS(X’pp’,X’ss’) MODE(RESYNC) OPTION(XD)  TSO CESTPAIR DEVN(device_number) PRIM(ssid serialno cca lss) SEC(ssid serialno cca lss) MODE(RESYNC) OPTION(XD)

Advanced Technical Support, Americas © 2005 IBM Corporation 58 Path Status DS CLI lspprcpath –dev storage_image_ID Source_LSS_ID

Advanced Technical Support, Americas © 2005 IBM Corporation 59 Pair Status DS CLI lspprc –dev storage_image_ID –remotedev storage_image_ID –l SourceVolumeId:TargetVolumeId

Advanced Technical Support, Americas © 2005 IBM Corporation 60 Path Status ICKDSF PPRCOPY QUERY DDNAME(dname) PATHS ICK00700I DEVICE INFORMATION FOR 5C11 IS CURRENTLY AS FOLLOWS: PHYSICAL DEVICE = 3390 STORAGE CONTROLLER = 3990 STORAGE CONTROL DESCRIPTOR = E9 DEVICE DESCRIPTOR = 0A ADDITIONAL DEVICE INFORMATION = 4A001B35 TRKS/CYL = 15, # PRIMARY CYLS = 150 ICK04030I DEVICE IS A PEER TO PEER REMOTE COPY VOLUME QUERY REMOTE COPY – PATHS PRIMARY CONTROL UNIT INFORMATION SERIAL WORLD WIDE NUMBER SSID LSS NODE NAME C01F4C SECONDARY CONTROL UNIT INFORMATION SERIAL WORLD WIDE NUMBER SSID LSS NODE NAME PATHS: SERIAL WORLD WIDE NUMBER SSID LSS NODE NAME PATH SAID DEST S* ST: C01F4C 1 002C 00AC AC 002C 13 2ND: RD: TH:

Advanced Technical Support, Americas © 2005 IBM Corporation 61 Pair Status ICKDSF PPRCOPY QUERY UNIT(ccuu) ICK00700I DEVICE INFORMATION FOR 5C11 IS CURRENTLY AS FOLLOWS: PHYSICAL DEVICE = 3390 STORAGE CONTROLLER = 3990 STORAGE CONTROL DESCRIPTOR = E9 DEVICE DESCRIPTOR = 0A ADDITIONAL DEVICE INFORMATION = 4A001B35 TRKS/CYL = 15, # PRIMARY CYLS = 150 ICK04030I DEVICE IS A PEER TO PEER REMOTE COPY VOLUME QUERY REMOTE COPY - VOLUME (PRIMARY) (SECONDARY) SSID CCA SSID CCA DEVICE LEVEL STATE PATH STATUS SER # LSS SER # LSS C11 PRIMARY DUPLEX ACTIVE PATHS SAID/DEST STATUS DESCRIPTION C 00AC 13 ESTABLISHED FIBRE CHANNEL PATH 00AC 002C 13 ESTABLISHED FIBRE CHANNEL PATH ICK02206I PPRCOPY QUERY FUNCTION COMPLETED SUCCESSFULLY ICK00001I FUNCTION COMPLETED, HIGHEST CONDITION CODE WAS 0

Advanced Technical Support, Americas © 2005 IBM Corporation 62 Path Status TSO CQUERY DEVN(device_number) PATHS

Advanced Technical Support, Americas © 2005 IBM Corporation 63 Pair Status TSO CQUERY DEVN(device_number)

Advanced Technical Support, Americas © 2005 IBM Corporation 64 Pair Status ISMF To Further Limit the Generated List, Specify a Single Value or List of Values in any of the following: Rel Op Value Value Value Value Cache Fast Write Status.. CF Volume Status Dasd Fast Write Status... Duplex Status Index Status Physical Status Read Cache Status..... Shared Dasd Use Attributes

Advanced Technical Support, Americas © 2005 IBM Corporation 65 Pair Status ISMF Use ENTER to continue, END to exit Help. HELP DUPLEX STATUS (Page 2 of 2) HELP COMMAND ===> PPRIMARY The volume is primary of a PPRC pair. PSECNDRY The volume is secondary of a PPRC pair. PPRI-PEN The volume is primary of a PPRC pair in the process of being established. PSEC-PEN The volume is secondary of a PPRC pair in the process of being established. PPRI-SUS The volume is primary of a PPRC pair that is suspended. PSEC-SUS The volume is secondary of a PPRC pair that is suspended. PPRI-FAI The volume is primary of a PPRC pair in fail status. PSEC-FAI The volume is secondary of a PPRC pair in fail status.

Advanced Technical Support, Americas © 2005 IBM Corporation 66 Line Speeds

         Mbps      Approximate MB/sec   Equivalent T1 lines
  T1     1.544     0.19                 1
  T3     44.736    5.6                  ~29
  OC3    155.52    19.4                 ~100
  OC12   622.08    77.8                 ~400
  OC48   2488.32   311                  ~1600
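As an illustrative, hypothetical sizing example using the table above: a workload with a sustained remote-copy write rate of 25 MB/sec needs roughly 200 Mbps of usable bandwidth before protocol overhead, so an OC3 (about 19 MB/sec) would already be saturated, while an OC12 (about 78 MB/sec) leaves headroom for peaks and consistency group drains.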

Advanced Technical Support, Americas © 2005 IBM Corporation 67 IBM Copy Services Technologies – DS6K/DS8K  FlashCopy – point in time copy within a storage system; available on DS8000, DS6000, ESS, SAN Volume Controller, DS4000, N series  Metro Mirror – synchronous mirroring from a primary site A to a metro distance (<300 km) site B; available on DS8000, DS6000, ESS, SAN Volume Controller, DS4000, N series  Global Mirror – asynchronous mirroring from a primary site A to an out-of-region site B; available on DS8000, DS6000, ESS, SAN Volume Controller, DS4000, N series  Metro/Global Mirror – three site synchronous and asynchronous mirroring across primary site A, metro site B, and out-of-region site C; available on DS8000, ESS, N series

Advanced Technical Support, Americas © 2005 IBM Corporation 68 Copy Services Matrix GMz 10 (XRC) Primary GMz 10 (XRC) Secondary Metro Mirror or Global Copy Primary Metro Mirror or Global Copy Secondary Global Mirror Primary Global Mirror Secondary FlashCopy Source FlashCopy Target Incremental FLC Source Incremental FLC Target Concurrent Copy Source GMz 10 (XRC) Primary NoYes 11 YesNoYesNoYes No Yes GMz 10 (XRC) Secondary Yes 11 NoYesNo 5 YesNo 5 YesNo 5 YesNo 5 Yes Metro Mirror or Global Copy Primary Yes NoYes 1 No 6 Yes 1 Yes No 6 Yes Metro Mirror or Global Copy Secondary NoNo 5 Yes 1 NoYes 1 NoYesYes 8 Yes No Global Mirror Primary Yes No 6 Yes 1 No Yes Global Mirror Secondary NoNo 5 Yes 1 No YesYes 8 Yes 9 No FlashCopy Source Yes Yes 3,4 Yes 4 Yes 3 NoYes FlashCopy Target NoNo 5 Yes 2 No Yes 4 No Incremental FLC Source No 7 Yes Yes 9 YesNo Yes Incremental FLC Target No 7 No 5 Yes 2 No Yes Concurrent Copy Source Yes NoYesNoYes Device Is May Become

Advanced Technical Support, Americas © 2005 IBM Corporation 69 Notes: 1. Only in a Metro/Global Copy (supported on ESS) or a Metro/Global Mirror environment (supported on ESS and DS8000). 2. FlashCopy V2 at the required LIC level and higher on ESS 800 (DS6000 and DS8000 utilize FlashCopy V2 by default). –You must specify the proper parameter to perform this –Metro Mirror primary will go from full duplex to copy pending until all of the flashed data is transmitted to the remote –Global Mirror primary cannot be a FlashCopy target 3. FlashCopy V2 Multiple Relationship. 4. FlashCopy V2 Data Set FlashCopy (only available for z/OS volumes). 5. The Storage Controller will not enforce this restriction, but it is not recommended. 6. A volume may be converted between the states Global Mirror primary, Metro Mirror primary and Global Copy primary via commands, but two relations cannot exist at the same time (i.e., no multi-target). 7. GMz (XRC) Primary, Global Mirror Secondary, Incremental FlashCopy Source and Incremental FlashCopy Target all use the Change Recording function; for a particular volume only one of these relationships may exist. 8. Updates to the affected extents will result in the implicit removal of the FlashCopy relationship, if the relationship is not persistent. 9. This relationship must be the FlashCopy relationship associated with Global Mirror, i.e. there may not be a separate Incremental FlashCopy relationship. 10. Global Mirror for z/OS (GMz) is supported on ESS and DS8000. 11. In order to ensure data consistency, the XRC journal volumes must also be copied.

Advanced Technical Support, Americas © 2005 IBM Corporation 70 References  SC DS8000 Command-Line Interface User's Guide  GC DS6000 Command-Line Interface User's Guide  SC DFSMS Advanced Copy Services  SG IBM System Storage DS8000 Series: Copy Services in Open Environments  SG IBM System Storage DS8000 Series: Copy Services with IBM System z  SG IBM System Storage DS6000 Series: Copy Services in Open Environments  SG DS6000 Series: Copy Services with IBM System z Servers  Performance White Paper – http://w3-1.ibm.com/sales/systems/portal/_s.155/254?navID=f320s260&geoID=All&prodID=System%20Storage&docID=ditlDS8000PerfWPPower5  DS8000/DS6000 Copy Services: Getting Started – WP – http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebDocs/?Search&Query=[HTMLDocumentName=WM*]+AND+(burger)&Start=1&Count=50&SearchOrder=1&SearchMax=10000

Advanced Technical Support, Americas © 2005 IBM Corporation 71 Trademarks The following terms are trademarks or registered trademarks of the IBM Corporation in either the United States, other countries or both. Linear Tape-Open, LTO, LTO Logo, Ultrium logo, Ultrium 2 Logo and Ultrium 3 logo are trademarks in the United States and other countries of Certance, Hewlett-Packard, and IBM. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States and/or other countries. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries. Intel, Intel Inside (logos), MMX and Pentium are trademarks of Intel Corporation in the United States and/or other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States and other countries. Other company, product, or service names may be trademarks or service marks of others.  AIX  AIX 5L  BladeCenter  Chipkill  DB2  DB2 Universal Database  DFSMSdss  DFSMShsm  DFSMSrmm  Domino  e-business logo  Enterprise Storage Server  ESCON  eServer  FICON  FlashCopy  GDPS  Geographically Dispersed Parallel Sysplex  HiperSockets  i5/OS  IBM  IBM eServer  IBM logo  iSeries  Lotus  ON (button device)  On demand business  OnForever  OpenPower  OS/390  OS/400  Parallel Sysplex  POWER  POWER5  Predictive Failure Analysis  pSeries  S/390  Seascape  ServerProven  System z9  System p5  System Storage  Tivoli  TotalStorage  TotalStorage Proven  TPF  Virtualization Engine  X-Architecture  xSeries  z/OS  z/VM  zSeries