John Sing/San Jose/IBM April 4, 2012


Copy / refresh data off the back end of IBM DS8000 Global Mirror, using Global Copy
Prepared for IBM Strategic Outsourcing
John Sing/San Jose/IBM, April 4, 2012

John Sing
31 years of experience with IBM in high-end servers, storage, and software
2009 - Present: IBM Systems Group Executive IT Consultant - IT Strategy and Planning, Enterprise Storage, Big Data Analytics, HA/DR/BC, WW Technical Marketing
2002 - 2008: IBM WW Business Continuity, IT HA/DR, IT Strategy
1998 - 2001: IBM Storage Subsystems Group - Enterprise Storage Server Marketing Manager, Planner for ESS Copy Services (FlashCopy, PPRC, XRC, Metro Mirror, Global Mirror)
1994 - 1998: IBM Hong Kong, IBM China Marketing Specialist for High-End Storage
1989 - 1994: IBM USA Systems Center Specialist for High-End S/390 processors
1982 - 1989: IBM USA Marketing Specialist for S/370, S/390 customers (including VSE and VSE/ESA)
singj@us.ibm.com
You may follow my daily IT research mini-blog: http://www.delicious.com/atsf_arizona
IBMers may access my IBM Intranet webpages: http://snjgsa.ibm.com/~singj/

Migrating data with Global Copy
Global Copy is a disk subsystem based asynchronous replication solution
Supported on ESS, DS6000, and DS8000 for all data types
Failover/failback capability reduces the need for a full copy: only incremental changes are copied after a test, or in case of a return to the original location
Bitmaps are used to track the data that still needs to be sent
Minimal performance impact; does not use additional cache
With Global Copy alone, an outage is required to create a consistent copy
Using Global Mirror, or converting to Metro Mirror, allows the migration to be tested without a production outage
Supports very long distances; migrations have been performed from the UK to Germany, Chile to Spain, etc.
Fibre Channel connectivity between disk subsystems; channel extension equipment is used to provide the connection over a WAN
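By way of illustration, a minimal DSCLI sketch of establishing Global Copy pairs between two DS8000s. The storage image IDs, WWNN, I/O ports, and volume ranges are hypothetical placeholders (reused in the later sketches), and options should be verified against the DS8000 Command-Line Interface User's Guide for your code level.

  # Create remote mirror (PPRC) paths from the source LSS to the target LSS over Fibre Channel ports
  mkpprcpath -dev IBM.2107-75BBBBB -remotedev IBM.2107-75MMMMM -remotewwnn 5005076300ABCDEF -srclss 01 -tgtlss 01 I0010:I0110

  # Establish the volume pairs in Global Copy (extended distance) mode
  mkpprc -dev IBM.2107-75BBBBB -remotedev IBM.2107-75MMMMM -type gcp 0100-010F:0100-010F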

Definitions: Global Copy for remote data migration / copy
Use Global Copy to effect large data movement, with minimal relocation effort and a minimal number of volumes (source B, target M)
Set up the DS8000 Global Copy environment
Start and monitor the initial copy B to M
SUSPEND with Go-To-Sync consistency group
Establish incremental change tracking on both B and M
Test on M
Once the test is over, reset and incrementally resync B to M for the next test of cutover; this automatically resets M properly

DS8000 Global Copy operation (PPRC-XD)
Non-synchronous transfer from the primary (volume B) to the secondary (volume M); supported on DS8000, DS6000, and ESS
Objective: efficiently move large amounts of data with low overhead, and keep the telecom line very well utilized
Global Copy continuously cycles through the volume bitmap (it stays in PPRC establish phase 1)
Any updates to tracks/sectors on a volume/LUN are noted by turning a bit on in the bitmap
Non-synchronous copy operation: there is no host I/O wait for updates to be sent to the secondary
A queuing algorithm schedules each volume for subsequent scans of its changed-data bitmap
Only changed tracks are sent, using pre-deposit write; if all the changes are still in cache, only the changed sectors are sent
Global Copy sends data in performance sequence, not in the order it was written, so the result is a 'fuzzy' copy at the remote site
Consistency at M can be created by:
Quiescing the workload at B and allowing M to catch up, or
Go-to-Sync by command (the copy becomes synchronous Metro Mirror, i.e. PPRC establish phase 2), or
Adding the Global Copy volumes to a DS8000 Global Mirror session
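As a hedged illustration of the Go-to-Sync option: DS8000 documentation describes converting existing Global Copy pairs to synchronous Metro Mirror by reissuing mkpprc with -type mmir. The device IDs and volume ranges below are hypothetical placeholders, and the exact procedure should be confirmed for your environment.

  # Convert the existing Global Copy pairs to Metro Mirror (go-to-sync); the pairs drain and then run synchronously
  mkpprc -dev IBM.2107-75BBBBB -remotedev IBM.2107-75MMMMM -type mmir 0100-010F:0100-010F

  # Or, after quiescing the host workload and letting out-of-sync tracks reach zero, pause the pairs to hold a consistent image
  pausepprc -dev IBM.2107-75BBBBB -remotedev IBM.2107-75MMMMM 0100-010F:0100-010F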

General data center migration concept using DS8000 Global Copy
(Diagram: production volumes B at the current site, volumes M at the future production location)
Overview of Global Copy use for data center migration:
Set up the Global Copy (PPRC-XD) environment
Start and monitor the initial copy B to M
SUSPEND, with failover/failback change tracking on both B and M
Test the cutover on M
Once the test is over, reset and incrementally resync B to M for the next test of cutover; this automatically resets M properly
After multiple successful tests, execute the cutover
We will adapt this methodology for use in your GM environment

Other ways to use DS8000 storage replication
Point-in-Time copy followed by remote mirror
Point-in-Time copy followed by remote mirror followed by Point-in-Time copy
(Diagram: volumes A, B, C, D)
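A hedged sketch of the first combination: a point-in-time copy whose target volume is also a remote mirror primary. The option shown for allowing the FlashCopy target to be a remote mirror source (-tgtpprc), like the storage image and volume IDs, is an assumption to verify against the DSCLI reference.

  # FlashCopy volume 0100 (A) to volume 0200 (B); the target 0200 is also the primary of an existing remote mirror pair
  mkflash -dev IBM.2107-75AAAAA -record -persist -tgtpprc 0100:0200

  # The remote mirror pair from 0200 to the secondary site then propagates the point-in-time image
  lspprc -dev IBM.2107-75AAAAA -l 0200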

Other ways to use DS8000 storage replication - 2
Point-in-Time copy from a remote mirror primary
Make a Point-in-Time safety copy of the remote mirror
(Diagram: volumes A, B, C, D)

Other ways to use DS8000 replication
Asynchronous mirroring for out-of-region recovery
(Diagram: volume G)

Other ways to use DS8000 replication
Migrate data from both ends of the A-to-C D/R link:
Older generation A devices to newer generation E devices
Older generation C devices to newer generation G devices
Without impacting D/R protection on the A-to-C link
(Diagram: volumes A, C, E, G)

Incremental resynchronisation bitmaps
The Global Copy and Global Mirror incremental resynchronisation functions allow bitmaps to be created so that only changed data is sent
Avoids ever having to do a full copy
The disk subsystems track which data blocks have changed, and send only the changed data
Minimizes time and saves bandwidth
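As a small, hedged example, the long output of the DSCLI lspprc command reports an out-of-sync track count per pair, which is the visible effect of these bitmaps draining as changed data is sent (storage image and volume IDs are placeholders).

  # -l (long) output includes the copy state and the number of out-of-sync tracks remaining for each pair
  lspprc -dev IBM.2107-75BBBBB -l 0100-010F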

Notes
Assume that building automation of all these steps is a necessity
Migrate test systems first, to gain experience that can be applied to the subsequent migration of large systems

Notes
Circumventions may be possible to avoid doing the initial loads over telecom links, for example by doing tape dumps followed by incremental resyncs
Those possibilities are outside the scope of this version of this document

Initial load of data to volumes at Cloud for testing

Start testing of migration data to Raleigh
While Global Mirror continues to run:
Start the Global Copy pairs B to M
B volumes = GC primaries; M volumes = GC secondaries
M volumes = new production volumes for testing and eventual production
This performs the initial load of M
(Diagram: Original site A to D/R site B via Global Mirror, with GM consistency group volumes C and practice D/R test volumes D; B to M Global Copy to Raleigh. GM = Global Mirror, GC = Global Copy)
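A hedged sketch of this step in DSCLI single-shot mode, assuming the PPRC paths from B to M are already in place (see the earlier mkpprcpath sketch). The HMC address, credentials file, storage image IDs, and volume ranges are hypothetical.

  # Hypothetical wrapper for DSCLI single-shot commands against the B-site HMC
  DSCLI="dscli -hmc1 b-hmc.example.com -user admin -pwfile /opt/dscli/security.dat"

  # Start the Global Copy pairs B to M; this begins the initial load of the M volumes
  $DSCLI mkpprc -dev IBM.2107-75BBBBB -remotedev IBM.2107-75MMMMM -type gcp 0100-010F:0100-010F

  # Re-display pair status every five minutes; interrupt once the out-of-sync track counts have drained
  while true; do $DSCLI lspprc -dev IBM.2107-75BBBBB -l 0100-010F; sleep 300; done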

Bring data to a consistent state for the test at Raleigh
Goals are:
No production outage or impact
Minimize the impact to D/R protection to the greatest extent reasonable

Definitions: Failover / Failback / Suspend
Failover:
Is always issued to a secondary volume
Tells that volume "now you are a suspended primary"
Starts change tracking
Is never issued to a primary volume; you will get an "improper state" message
You do specify the secondary and primary volumes
Does not need paths/connections available to the primary; it works even if there are no connections
Failback:
Is issued to whichever volume is going to become the new primary
That new primary then communicates with the secondary volume
You always must specify which volume is primary and which volume is secondary
Hence paths and connections to the secondary must be active and available, or the command will not complete
Suspend:
Stops mirroring of data to the secondary volume
Starts keeping a record of the primary volume tracks that are updated; that information is used later, when the pair is re-established, to copy just the updated tracks
You specify the primary and secondary volumes to be suspended
(In this deck: 'B' is the primary, 'M' is the secondary)
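A hedged DSCLI illustration of these definitions using this deck's B (primary) and M (secondary) volumes; storage image IDs and volume ranges are placeholders.

  # Failover: issued at the secondary box (M); the M volumes become suspended primaries and start change tracking
  failoverpprc -dev IBM.2107-75MMMMM -remotedev IBM.2107-75BBBBB -type gcp 0100-010F:0100-010F

  # Failback: issued at whichever box is to be (or remain) the primary; only the tracked changes are resynchronized
  failbackpprc -dev IBM.2107-75BBBBB -remotedev IBM.2107-75MMMMM -type gcp 0100-010F:0100-010F

  # Suspend: pauses mirroring on existing pairs while recording which primary tracks are updated
  pausepprc -dev IBM.2107-75BBBBB -remotedev IBM.2107-75MMMMM 0100-010F:0100-010F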

Initial starting point
While Global Mirror runs, Global Copy is copying changes B to M, keeping M quasi-current
The data lag at M is short: only a few seconds
(Diagram: A to B Global Mirror with consistency group volumes C and practice D/R test volumes D; B to M Global Copy to Raleigh)

Start the process to bring the M volumes to a consistent state
Issue Pause to the GM session A to B; GM stops forming consistency groups
Issue Suspend to the GC pairs at A; this starts incremental change tracking at A
Note: if Global Mirror is running between A and B, there are actually two things to suspend: the Global Mirror session, and the Global Copy pairs that underlie the Global Mirror session. Both must be suspended.
(Diagram: GM session A to B paused and GC pairs suspended; consistency group volumes C, practice D/R test volumes D, Raleigh volumes M)
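A hedged DSCLI sketch of this step; the session number, LSS, storage image IDs, and volume ranges are placeholders, and the pausegmir parameters in particular should be verified against your Global Mirror configuration.

  # Pause the Global Mirror session mastered at the A box; consistency group formation stops
  pausegmir -dev IBM.2107-75AAAAA -lss 01 -session 01

  # Suspend the underlying Global Copy pairs A to B; incremental change tracking starts at A
  pausepprc -dev IBM.2107-75AAAAA -remotedev IBM.2107-75BBBBB 0100-010F:0100-010F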

Incrementally forward consistency group C volumes to Raleigh M
At B:
Issue Suspend with Failover to the B volumes; this makes B a suspended primary (note that B to M is still running)
Restore the GM consistency group onto B:
Check the FlashCopies for revertible state
Fast Reverse Restore C to B
Recreate the FlashCopies B to C
Wait until the B to M Global Copy has reached zero out-of-sync tracks
Note: do not issue Recover on the GM session; that puts B in simplex state, which we do not want
The RPO at the D/R site is aging while GM A to B is stopped; however, this process to refresh M will be fast, because M was only a few seconds behind B, so the elapsed time until GM A to B is restarted will be short
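A heavily hedged sketch of the "restore the consistency group onto B" sub-steps, using the DSCLI FlashCopy command family (lsflash, commitflash/revertflash, reverseflash, mkflash). The -fast option shown for Fast Reverse Restore, the -record/-persist/-nocp options on the re-established FlashCopies, and all IDs are assumptions to verify against the DSCLI reference and the Global Mirror recovery procedures for your code level.

  # Check whether the B-to-C FlashCopy relationships are revertible (a consistency group formation was in flight)
  lsflash -dev IBM.2107-75BBBBB -l 0100-010F

  # If they are revertible, commit (or revert) the in-flight consistency group first
  commitflash -dev IBM.2107-75BBBBB 0100-010F

  # Fast Reverse Restore: reverse the existing B-to-C relationships so the consistent C image flows back onto B
  reverseflash -dev IBM.2107-75BBBBB -fast 0100-010F:0200-020F

  # Recreate the B-to-C FlashCopies so that Global Mirror can form consistency groups again later
  mkflash -dev IBM.2107-75BBBBB -record -persist -nocp 0100-010F:0200-020F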

Status of B and M after the new consistency group is copied through B to M
Global Copy completes the copy of the incremental changes B to M
The duration of the remaining incremental copy is proportional to:
The amount of out-of-sync data not yet copied to M, plus
The amount of any data that needs to be reset at M
Soon, B = M
(Diagram: GM session paused and GC pairs suspended at A; consistency group volumes C, practice D/R test volumes D, Raleigh volumes M)

Suspend the Global Copy pairs B to M
As soon as B = M:
Issue Suspend to the B volumes to start change tracking B to M
Issue Failover to the M volumes; this makes M a suspended primary and starts incremental change tracking at M
The M volumes become suspended primaries, with change tracking on
(Diagram: GM session A to B still paused; GC suspended at B to M with change tracking on)
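A hedged DSCLI sketch of these two actions (storage image IDs and volume ranges are placeholders).

  # Suspend the B-to-M Global Copy pairs; B starts tracking changes destined for M
  pausepprc -dev IBM.2107-75BBBBB -remotedev IBM.2107-75MMMMM 0100-010F:0100-010F

  # Failover issued at the M box: the M volumes become suspended primaries and track changes made during the test
  failoverpprc -dev IBM.2107-75MMMMM -remotedev IBM.2107-75BBBBB -type gcp 0100-010F:0100-010F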

Restart GM from Original to D/R
Issue Global Copy Failback to the A volumes; this starts a resync of the GC pairs A to B (only incremental changes are sent)
Then restart the GM session A to B (a standard GM resume / start)
D/R protection from Original to D/R is re-established
(Diagram: GC resync and GM restarted A to B; GC suspended at B to M with change tracking on; M volumes are suspended primaries with change tracking on)
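A hedged DSCLI sketch of restarting the Original-to-D/R leg; the resumegmir parameters, like the IDs, are placeholders to verify against your configuration.

  # Failback at the A box: resynchronizes the A-to-B Global Copy pairs, sending only the tracked changes
  failbackpprc -dev IBM.2107-75AAAAA -remotedev IBM.2107-75BBBBB -type gcp 0100-010F:0100-010F

  # Resume the Global Mirror session so consistency groups are formed again
  resumegmir -dev IBM.2107-75AAAAA -lss 01 -session 01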

Copy data from the M volumes to the Cloud
The consistent test data can now be copied from M to the Cloud
Changes are tracked on M, for later reset during the next refresh cycle
(Diagram: GM A to B running; GC suspended at B to M with change tracking on; M volumes are suspended primaries with change tracking on)

When finished with the test, use Global Copy to refresh B to M, in preparation for the next refresh cycle
While GM continues to run, at B:
Issue a Failback to B; this makes B the primary and M the secondary
Resync of changed data starts flowing again from B to M
This also resets any data changed at M during the test
(Diagram: GM A to B running; GC resync B to M to Raleigh. GM = Global Mirror, GC = Global Copy)
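A hedged DSCLI sketch of the refresh; storage image IDs and volume ranges are placeholders.

  # Failback at the B box: B becomes the Global Copy primary again, tracked changes flow B to M,
  # and any data changed at M during the test is reset from B's copy
  failbackpprc -dev IBM.2107-75BBBBB -remotedev IBM.2107-75MMMMM -type gcp 0100-010F:0100-010F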

Thank You
(Thank-you in many languages: English, Spanish (Gracias), Brazilian Portuguese (Obrigado), Italian (Grazie), German (Danke), French (Merci), Turkish (Tesekkurler), Russian, Arabic, Hebrew, Hindi, Tamil, Thai, Japanese, Korean, Simplified and Traditional Chinese)
Some pronunciations (as best I can, for the ones I know): Chinese: xie-xie; Russian: spa-si-ba; Korean: kam-ya-ti-da
Others are welcome to be added to this chart over time. Thanks to Curtis Neal/San Jose/IBM for giving me the original version of this chart, which I have expanded a little.
John Sing, singj@us.ibm.com
IBM Systems and Technology Group, San Jose, California, USA
February 2009