1
IBM DS8000 Data Replication Best Practices
Bob Kern – Jim Sedgwick – Hank Sautter –
2
Agenda DS8000 Copy Services - Best Practices
DS8000 Data Replication Technology Key Concepts
Selecting the Right Solution to Match Your Client's Business Needs
Planning for Data Replication: Configuration, Data Collection, Bandwidth Studies, Automation
Case Studies – What you do not want to do…
Speaker notes: Today we are going to discuss IBM data replication technology versus our competition from EMC and HDS. We will compare IBM XRC and GM against EMC SRDF/A and HDS Universal Replicator, as well as IBM XRC, FlashCopy and Metro Mirror against the same architecture implementations on EMC and HDS subsystems. The information on the competition is presented as our understanding of the competitive products from public, published materials. If customers require specific, detailed information on EMC or HDS disk subsystem data replication solutions, they should direct those questions to the specific vendor.
3
DS8000 Copy Services - Best Practices
Map the solution to the customer's business requirements (RTO/RPO): PiT solution vs. continuous mirror, sync vs. async. For several small customers doing PTAM, a PiT Incremental FlashCopy + Global Copy can be attractive.
High Availability: z/OS Basic HyperSwap or GDPS HyperSwap Manager; distributed systems – software dual write across two LUNs. Metro Mirror on the same data center floor: an outage with re-IPL minimizes the outage time.
Configuration guidelines for primary and secondary: balance primary and secondary performance (subsystems/cache/drives/etc.). 1:1 or 2:1 configurations tend to be simpler to configure and manage. Be careful reusing "old" technology boxes as targets.
Ability to test D/R while maintaining D/R protection: standard for most customers. Emerging standard – test the way they recover and recover the way they test. Failover/failback functionality. A site toggle can reduce D/R test costs: run six months in each site.
Bandwidth analysis and data collection: use tools such as Disk Magic and RMF Magic. Analysis is very important to understand the bandwidth needed for the MB/sec update rates; MB/sec update rates can also help the customer manage his business. Understand activity 24x7.
Planning for capacity growth: the initial deployment should have some capacity growth planned into the solution, and the customer needs to understand how to extend it as the years go by and workloads change.
Review the D/R lessons-learned white paper: IBM Storage Infrastructure for Business Continuity (updated this year).
Management software: TPC-R or GDPS. DSCLI can be made to work, but can be very complicated; the best implementations involve TPC-R or GDPS software.
4
Map the Right Solution to the Clients Business Requirements
5
Aspects of Availability
CA = HA + CO
Continuous Availability (CA) – the attribute of a system to deliver non-disruptive service to the end user 7 days a week, 24 hours a day (there are no planned or unplanned outages).
Continuous Operations (CO) – the attribute of a system to continuously operate and mask planned outages from end users. It employs non-disruptive hardware and software changes, non-disruptive configuration, and software coexistence.
High Availability (HA) – the attribute of a system to provide service during defined periods, at acceptable or agreed-upon levels, and to mask unplanned outages from end users. It employs fault tolerance; automated failure detection, recovery, bypass reconfiguration and testing; and problem and change management.
Continuous Operations: non-disruptive backups and system maintenance coupled with continuous availability of applications.
High Availability: fault-tolerant, failure-resistant infrastructure supporting continuous application processing.
Disaster Recovery: protection against unplanned outages such as disasters through reliable, predictable recovery – protection of critical business data; operations continue after a disaster; recovery is predictable and reliable; costs are predictable and manageable.
6
Business Continuity Tiers
Chart: Business Continuity tiers plotted against Recovery Time Objective (15 min, 1-4 hr, 4-8 hr, 8-12 hr, 12-16 hr, 24 hr, days), with cost rising as the tier increases; the lower tiers recover from a tape copy, the higher tiers from a disk image. The chart also layers the cost components (D/R automation, disk replication, replication software, services, tape management software, network) across the tiers.
Tier 7 – Site mirroring with automated recovery
Tier 6 – Disk mirroring (with or without automation)
Tier 5 – Software replication
Tier 4 – Point-in-time disk copy
Tier 3 – Electronic vaulting
Tier 2 – Hot site, restore from tape
Tier 1 – Restore from tape
Best Business Continuity practice is to blend solutions in order to maximize application coverage at optimum cost.
Speaker notes: When we discussed the hardware infrastructure, we noted that the challenge in most enterprises is that there is a wide variety of value points in data but generally only a few cost points in the underlying storage infrastructure, making it difficult to effectively map the value of information to the appropriate cost of storage. There is a similar conversation with Advanced Copy Services: the same enterprise data that has different value points also has different recovery requirements. Recovery requirements are generally measured using two metrics. The first is the Recovery Point Objective (RPO). The RPO can be thought of as the degree of difference between the active online data and the disaster recovery copy of that data. An RPO of zero would mean that the primary copy and the disaster recovery copy are in exact synchronization; a failure would result in zero loss of data. Intuitively, this is what every IT manager would like to have, but it is generally quite expensive to implement. Some, maybe all, data (depending on your business) can stand a longer RPO, meaning that a failure would result in some transactional data being lost. The other metric is the Recovery Time Objective (RTO). The RTO is the amount of time after a failure that you are willing to spend before a given application or group of data is back up and available. An RTO of zero means that failures should cause zero disruption; again, this is what most IT managers would love to have if cost were not a factor. The goal with Advanced Copy Services is to implement multiple levels of recoverability, with multiple levels of associated cost, so that IT managers can do a more effective job of mapping the value and recovery needs of their data to the most appropriate recovery capability. By design, IBM offers purpose-built advanced copy services all along this recovery hierarchy. Ask questions to find out where the customer is in the above chart and where they want to go.
7
Data Replication Enabling Core Technologies -
FlashCopy – internal point-in-time copy. Available on: DS6000, DS8000, ESS, SAN Volume Controller, XIV, DS4000, DS5000. The copy data command is issued and the copy is immediately available; read and write to both source and target are possible. Background copy is optional; when the copy is complete, the relationship between source and target ends.
PiT Incremental FlashCopy + Metro Mirror / Global Copy. Available on: DS6000, DS8000, ESS. Diagram: the production primary is replicated over the WAN via Global Copy (PPRC-XD, asynchronous) to the secondary, where PiT Incremental FlashCopy creates remote PiT copies.
8
IBM Copy Services Technologies
FlashCopy – point-in-time copy within a storage system (including DS8K FlashCopy SE). Available on: DS8000, DS6000, SAN Volume Controller, DS4000/DS5000, N series, XIV.
Metro Mirror – synchronous mirroring from primary Site A to Site B at metro distance (<300 km); DS8K HyperSwap. Available on: DS8000, DS6000, ESS, SAN Volume Controller, DS4000/DS5000, N series, XIV.
Global Mirror – asynchronous mirroring from primary Site A to an out-of-region Site B. Available on: DS8000, DS6000, SAN Volume Controller, DS4000/DS5000, N series.
Metro/Global Mirror – three-site synchronous and asynchronous mirroring: primary Site A, metro Site B, out-of-region Site C. Available on: DS8000, N series.
9
Configuration Guidelines
10
Configuration Guidelines for Global Mirror
Whitepapers: Global Mirror.
TPC for Replication or GDPS is required to manage the GM secondary; managing GM with DSCLI scripts is not a realistic customer option.
Do not undersize the GM secondary.
Solution planning considerations: asymmetrical vs. symmetrical; volume size and layout – the LH, RH, RJ (A, B, C) volumes are exactly the same size; failover and failback considerations; more on layout later (see the sketch below).
LSS considerations – balance resources: dedicate two LSSs per DS8000 to an application, one even, one odd. This tends to group volume numbers in easily recognizable ranges, and management is easier with fewer LSSs.
Performance evaluation and bandwidth sizing between sites: use dedicated PPRC link ports, two minimum, on separate host adapters.
Testing!
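As a quick illustration of the volume-layout rule above (the A, B and C volumes of a copy set must be exactly the same size), here is a minimal sketch that checks a copy-set inventory exported to CSV. The file layout and column names (copy_set, role, capacity_gb) are assumptions for the example, not a DS8000 or TPC-R export format.

```python
import csv
from collections import defaultdict

def check_copy_set_sizes(path):
    """Flag copy sets whose A, B and C volumes are not all the same size.

    Expects a CSV with columns: copy_set, role (A/B/C), capacity_gb.
    The file format is hypothetical; adapt it to however your volume
    inventory is actually exported.
    """
    sizes = defaultdict(dict)                 # copy_set -> {role: capacity}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            sizes[row["copy_set"]][row["role"]] = float(row["capacity_gb"])

    problems = []
    for cs, roles in sorted(sizes.items()):
        missing = {"A", "B", "C"} - roles.keys()
        if missing:
            problems.append(f"{cs}: missing volumes {sorted(missing)}")
        elif len(set(roles.values())) != 1:
            problems.append(f"{cs}: sizes differ {roles}")
    return problems

if __name__ == "__main__":
    for issue in check_copy_set_sizes("copy_sets.csv"):
        print("WARNING:", issue)
```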
11
Global Mirror Prerequisite –> Performance at the Primary Site
Global Mirror primary performance must be good on the volumes to be replicated:
Rank performance – backend response times; frontend write response times ≤ 2 ms; no cache issues.
Host port performance; good I/O balance across ranks and host adapters.
Performance tools: TPC for Disk, RMF Magic.
Plan and monitor for growth: workloads increase, additional replication requirements arrive, new applications are added.
Diagram: Global Mirror primary at the local site, Global Mirror secondary at the remote site.
12
Global Mirror Secondary - Performance Capacity
The secondary must be equal to or greater than the performance capacity of the primary. Global Mirror places more write-performance stress on the secondary: more storage capacity, FlashCopy processing, and cache sized for the volume count. Do not undersize the secondary.
Recommended maximum number of Global Mirror volumes (earlier LIC / DS8000 LIC Release 3.0 or later):
ESS 800: 1000 / N/A
DS6800: 350 / (not listed)
DS8000 with 16 GB cache: 1500 / 4500
DS8000 with 32 GB cache: (values not captured)
DS8000 with 64 GB cache: 3000 / 9000
DS8000 with 128 GB cache: 6000 / 18000
DS8000 with 256 GB cache: 12000 / 36000
Example Global Mirror primary / secondary pairings:
ESS 800 primary – ESS 800 with Arrays Across Loops, DS8100 or DS8300 secondary
DS8100 model 921 primary – DS8100 model 931 or DS8300 (any model) secondary
DS8300 model 922 primary – DS8300 model 932 secondary
Diagram: Global Mirror primary at the local site, Global Mirror secondary at the remote site.
13
Placement of B and C volumes
Diagram: primary A volumes mirrored to secondary B volumes, each with a FlashCopy C volume.
Use the same RPM and the same RAID type on the secondary; DDM size equal to or one size greater than the primary.
On the secondary, placing the B and C copies of a volume on the same rank concentrates the hotspot on a single rank.
Better: all ranks contain equal numbers of B and C volumes, and the B and C copies for a particular volume are kept on separate ranks, so the activity for busy volumes is spread over two ranks.
14
Placement of B, C and D volumes
Option 1: D volumes placed on separate ranks to keep testing and backup activity separate, with the B and C copies for a particular volume on different ranks. This does reduce the ranks available for B and C volumes.
Option 2: all ranks contain equal numbers of B, C and D volumes, with the B and C copies for a particular volume kept on separate ranks.
The placement of the D volumes is less critical than that of B and C, but the layout shown above is easier to manage, track and implement.
15
FlashCopy Source & Target Placement
In general: spread evenly across disk subsystems; within each disk subsystem, spread evenly across clusters; within each cluster, spread evenly across device adapters; within each device adapter, spread evenly across ranks.
Place the FlashCopy target in the same cluster as the source. If using BACKGROUND COPY, place the target on a different device adapter.
FlashCopy Space Efficient: use it when economy is more important than performance and for short-lived relationships with a low update rate on the source volumes – short-term FlashCopy relationships, good for read-only applications such as tape backup or a 24-hour online backup.
Placement guidelines (FlashCopy impact to applications):
FlashCopy establish performance – cluster: same cluster; device adapter: doesn't matter; rank: different ranks.
Background copy performance – device adapter: different device adapter.
16
FlashCopy SE Relationships
Full volume only NOCOPY only in first release Background copy cannot be initiated to a SE volume by any means Must specify “SE target ok” at establish Recommended Usage Use FlashCopy Space Efficient when economy is more important than performance and for short-lived relationships with low update rate on source volumes Short Term FlashCopy relationships Good for read only applications Tape Backup, 24 hour online backup, etc
17
Data Collection Performance and Bandwidth Studies
18
Data Collection Essentials
Historical data: data selection is critical for obtaining valid results. Single data points or averages are not very useful; the size and duration of the peaks are important. Identify the daily peaks and the workload profile over time (end of month, end of quarter, end of year, etc.), identify the active volumes for workload balancing, and quantify the expected growth (see the sketch below).
Configuration details: production vs. test data (temporary data may not be part of Global Mirror), volume layout by array and storage pool, network configuration including the available bandwidth.
Monitoring performance and status: TPC Standard Edition, RMF data (enable ESS data collection), TPC for Replication, Global Mirror Monitor.
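The slide stresses peaks over averages. Below is a minimal sketch of that idea, assuming write-throughput samples have already been exported to a CSV with timestamp and write_mb_s columns (hypothetical names); in practice the data would come from RMF, TPC or iostat as listed on the next slide.

```python
import csv
from collections import defaultdict
from datetime import datetime

def daily_peaks(path, top_n=3):
    """Report the highest write-MB/s intervals per day from interval samples.

    CSV columns assumed: timestamp (ISO 8601), write_mb_s.
    Averages hide the peaks that drive bandwidth sizing, so we keep the
    top few intervals of each day instead.
    """
    by_day = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            by_day[ts.date()].append((float(row["write_mb_s"]), ts))

    for day in sorted(by_day):
        peaks = sorted(by_day[day], reverse=True)[:top_n]
        worst = ", ".join(f"{mb:.1f} MB/s @ {ts:%H:%M}" for mb, ts in peaks)
        print(f"{day}: {worst}")

if __name__ == "__main__":
    daily_peaks("write_rates.csv")
```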
19
Data Collection Essentials (2)
Data needed for evaluation: performance data for one week (I/O rates, data rates, response times), plus configuration details and event timelines (when are peaks expected).
Data sources: RMF for zSeries, iSeries PT reports, TotalStorage Productivity Center (TPC) reports, iostat reports, Windows perfmon reports.
Evaluation tools: Disk Magic (capacity planning), RMF Magic (performance evaluation), DS8Qtool (DS8000 physical and logical configuration details).
Contact IBM ATS for performance and bandwidth studies; Business Partners can request this through PartnerWorld.
20
Lessons & Automation
21
Lessons Learned About IT Survival
Repeated testing before a disaster is crucial to successful recovery after a disaster: TTWYR – Test The Way You Recover; RTWYT – Recover The Way You Test.
After a disaster, everything is different. Staff well-being will be the first priority, and the company will benefit greatly from well-documented, tested, available and automated (to the extent possible) recovery procedures. It may be necessary to implement an in-house D/R solution to meet the RTO/RPO.
Plan geographically dispersed IT facilities: IT equipment, control center, offices, workstations, phones, staff, network entry points. Installed server capacity at the second data center can be utilized to meet normal day-to-day needs; failover capacity can be obtained by prioritizing workloads and exploiting new technology such as Capacity Back Up (CBU).
Data backup planning and execution must be flawless. Disk mirroring is required for an RTO under 12 hours (and needs 2x capacity). Machine-readable data can be backed up; not so for paper files.
Check the D/R readiness of critical suppliers and vendors.
Speaker notes: After 9/11, studies were done on how companies managed their disaster recovery; these are the key lessons learned. At the top of the list is the fact that your staff has other priorities: they may not be reachable for several hours, or may not have survived the disaster. GDPS is a solution, not a technology. With the GDPS automation in place, all that is needed is one command issued by an operator at the D/R site; then just watch the automation enable the D/R site and start up the production workload. Other key points: you need truly redundant network entry points and must verify that diverse routes do not go through the same area, and you need not only a good D/R plan but suppliers and vendors that are ready as well, since they may be impacted by the same event.
22
Automation: Critical for successful rapid recovery & continuity
The benefits of automation: Allows business continuity processes to be built on a reliable, consistent recovery time Recovery times can remain consistent as the system scales to provide a flexible solution designed to meet changing business needs Reduces infrastructure management cost and staffing skills Reduces or eliminates human error during the recovery process at time of disaster Facilitates regular testing to help ensure repeatable, reliable, scalable business continuity Helps maintain recovery readiness by managing and monitoring the server, data replication, workload and the network along with the notification of events that occur within the environment Automation = Good. No automation = Bad. Simple. Automate - Automate - Automate
23
Tivoli Storage Productivity Center for Replication (TPC-R) Overview
Replication management solution Simplified replication management & monitoring Powerful commands and logic Multiple Storage subsystems DS8000, DS6000, ESS800, SVC Multiple logical volume types Open systems (FB) LUNs z/OS (CKD) volumes Multiple replication types FlashCopy Metro Mirror Global Mirror Metro/Global Mirror High performance and scalability TPC for Disk, TPC for Data and TPC for Fabric are not required but can coexist on the same server Shared instance of DB2 Shared SNMP port
24
TPC Replication Manager
DS6000 and DS8000 support; Global Mirror support; replication progress monitoring; high-availability disaster recovery automation (failover, failback).
Functions: set up copy sessions, execute copy operations, monitor copy status, manage and monitor consistency groups, alert operations on exceptions and failures.
Diagram: TPC for Replication managing DS8000, DS6000, ESS and SAN Volume Controller at both the primary/source site and the secondary/target site.
Speaker notes: Customers today have to go through the CLI interface on the DS6000 and DS8000 to do replication. This is possible with a small number of replication sessions, but when it comes to controlling several replication pairs it is much easier to use the graphical interface provided by TPC for Replication V3.1. SVC provides a graphical interface for Metro Mirror replication today and will support Global Mirror sometime in 2Q or 3Q; TPC for Replication will support SVC Global Mirror in the first release of 2007. The number of customers we have today is very low because we only support ESS800 (which we don't sell anymore) and SVC, and previous releases did not support Global Mirror, one of the main reasons to have TPC for Replication.
Automated copy services configuration; central operations for copy services; operational status on copy services operations; assistance with recovery on failures.
25
TPC for Replication GUI
My Work hyperlinks on left Display area for panels on right Select session, select action (from dropdown list) and GO Tables with hyperlinks and sortable columns Health Overview on every panel Session view Triangle Indicates application access (active host) Arrows between roles indicate direction of active replication
26
TPC-R Video Series (new on Techdocs)
Series of live demonstrations captured on video, managing various environments. Link to a summary of all the videos:
The series includes the following demonstrations:
CLI vs. TPC-R (2:13) – adding copy sets using the CLI and TPC-R
TPC-R 3.3 GM (17:36) – GM setup with FO/FB using TPC-R 3.3
TPC-R 3.4 GM with Practice (6:39) – GM with Practice Volumes setup with FO/FB
TPC-R 4.1 overview (14:24) – overview of TPC-R with an MM setup demonstration
TPC-R 4.1 adding HW (10:01) – how to add DS8000 and SVC to TPC-R
TPC-R 4.1 MM setup (19:20) – using TPC-R 4.1 to manage MM with FO/FB
TPC-R 4.1 GM setup (17:36) – using TPC-R 4.1 to manage GM with FO/FB
TPC-R 4.1 MM with Practice (8:15) – using TPC-R 4.1 to create practice volumes
Speaker notes: What is TPC-R, or TotalStorage Productivity Center for Replication? TPC-R coordinates the copy services functionality for FlashCopy, Metro Mirror and Global Mirror. TPC-R provides consistency group management for Metro Mirror and Global Mirror on the ESS800, DS6000 and DS8000 models of IBM storage subsystems, and for Metro Mirror on SVC. This means that a data-consistent point is available at the recovery site for disaster recovery or site switching. TPC-R makes it much easier on the storage administrator by providing a single point of control for all of your copy services needs. Control and monitoring of copy services relationships is available through a graphical user interface and a command-line interface. It is no longer necessary to manage scripts and c-lists when TPC-R is used, because TPC-R provides a persistent store of your volume information to be used for the hardware copy services relationships. TPC-R also provides an easy way to identify source and target volume matching to the TPC-R sessions. TPC-R will also raise SNMP events that a user can listen to when there is a change in a TPC-R session; the alerts may be used to detect any abnormalities in the copying process or to determine when data consistency has been achieved on all of the managed volumes. TPC-R assists in setting up, monitoring and maintaining a copy services environment by providing site awareness, redundant TPC-R management server support, and the capability to do disaster recovery testing.
27
TPC-R Simplification: Starting a Global Mirror Copy
Using the DS8000 hardware commands:
Determine where to place the Master GM session, given the PPRC paths.
Establish PPRC links between the Master and Subordinate DS8000s.
Establish PPRC paths between the A and B volumes.
Establish Subordinate sessions on the A volumes of the DS8000s.
Establish a Global Copy relationship between A and B.
Query A to determine when the first pass is complete.
Establish an incremental FlashCopy between B and C.
Add A to the subordinate Global Mirror session.
If this is the first A volume on this DS8000, start the Global Mirror Master with the new configuration.
Monitor the Global Mirror Master with 051 queries and calculate the RPO; monitor for failures and fatal conditions.
Using TPC-R commands: START
(A conceptual sketch of this contrast follows.)
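To make the contrast with the single TPC-R START concrete, here is a conceptual walk-through of the manual sequence listed above. The helper simply prints each step; nothing here talks to a storage subsystem, and none of the names correspond to a real DS8000 or TPC-R API.

```python
def step(msg):
    # Stand-in for issuing the corresponding DS8000 hardware command.
    print("->", msg)

def start_global_mirror_manually(volumes):
    """Conceptual outline of the manual command sequence on this slide."""
    step("determine where to place the Master GM session, given the PPRC paths")
    step("establish PPRC links between Master and Subordinate DS8000s")
    for a, b, c in volumes:
        step(f"establish PPRC path and Global Copy relationship {a} -> {b}")
        step(f"add {a} to the subordinate Global Mirror session")
    step("query the A volumes until the first pass is complete")
    for a, b, c in volumes:
        step(f"establish incremental FlashCopy {b} -> {c}")
    step("start the Global Mirror Master with the new configuration")
    step("monitor the Master with 051 queries, calculate RPO, watch for fatal conditions")

def start_global_mirror_with_tpcr(session_name):
    """With TPC-R, the same outcome is a single action against the session."""
    print(f"TPC-R: START issued against session '{session_name}'")

if __name__ == "__main__":
    start_global_mirror_manually([("A001", "B001", "C001"), ("A002", "B002", "C002")])
    start_global_mirror_with_tpcr("GM_PROD")
```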
28
TPC-R Simplification: Recover a Global Mirror Copy
Using the DS8000 hardware commands:
Establish PPRC B to A with failover.
Query all B-to-C FlashCopy relationships and determine whether they are revertible and have the same sequence number.
If the sequence numbers are all the same and at least one relationship is not revertible, issue a "withdraw FlashCopy with commit" to all of the revertible relationships.
If all of the FlashCopy relationships are revertible, issue a "withdraw FlashCopy with revert" to all FlashCopy relationships.
Issue "establish FlashCopy C to B" with Fast Reverse Restore.
Using TPC-R commands: RECOVER
29
The right level of business continuity protection for your business…
The right level of business continuity protection for your business: the GDPS family of offerings.
Continuous availability of data within a data center – single data center, applications remain active, near-continuous availability to data: GDPS/PPRC HyperSwap Manager.
Continuous availability / disaster recovery within a metropolitan region – two data centers, systems remain active, automated D/R across a site or storage failure, no data loss: GDPS/PPRC, GDPS/PPRC HyperSwap Manager.
Disaster recovery at extended distance – two data centers, automated disaster recovery with "seconds" of data loss: GDPS/GM, GDPS/XRC.
Continuous availability regionally and disaster recovery at extended distance – three data centers, data availability, no data loss, extended distances: GDPS/MGM, GDPS/MzGM.
Speaker notes: GDPS end-to-end server, workload and data automation with a coordinated network switch has become the industry standard for business continuity within a single site, across two local sites, and/or to an out-of-region data center. Continuous availability can be provided via GDPS/PPRC with HyperSwap across two local data centers or within the same data center. GDPS will manage high-availability resources: HyperSwap technology fully masks disk subsystem outages, Parallel Sysplex masks CEC failures, dual clocks mask single clock failures, persistent sessions mask session failures, CF duplexing masks structure failures, and Tape VTS PtP or TS7700 Grid masks tape failures. Out-of-region D/R protection can be provided via GDPS management of the IBM distance mirroring technology.
30
The right level of protection for your business – Distributed Platforms
DR at extended distance – GDPS/XRC: rapid systems recovery with only "seconds" of data loss. Diagram: K-sys and SDM at Site-2; VCS and the GDPS DCM agent (GCO) at Site-1.
CA/DR within a metropolitan region – GDPS/PPRC: two data centers, systems remain active, designed to provide no data loss. K-Sys with VCS or SA AppMan at each site, plus Tivoli SA AppMan.
Tivoli SA AppMan platforms: IBM System p – AIX 5.2, 5.3, 6.1, Linux (SUSE SLES 9, 10; Red Hat RHEL 4, 5). IBM System x – Linux (SUSE SLES 9, 10; Red Hat RHEL 4, 5), Windows 2003, 2008. IBM System i – Linux (SUSE SLES 9, 10; Red Hat RHEL 4, 5). IBM System z – z/OS V1.7+, Linux (SUSE SLES 9, 10; Red Hat 4, 5). VMware ESX – Linux (SUSE SLES 9, 10; Red Hat RHEL 4, 5), Windows 2003, 2008.
Symantec VCS platforms: IBM System p and pHype – AIX 5.3. IBM System x (Intel/AMD x86_64) – SUSE SLES 9 and RHEL 4. HP (Itanium / PA-RISC) – HP-UX. Sun (SPARC) – Solaris 9 and 10. VMware ESX 3.0 (Intel/AMD x86_64) – SUSE SLES 9, RHEL 4, Windows AS and Windows 2003.
See the IBM Tivoli System Automation 3.1 Installation & Customization Guide release notes for a more detailed reference on the GDPS DCM supported configurations.
Speaker notes: GDPS automation can now also interoperate with Geographically Dispersed Open Clusters (GDOC) automation, providing a single end-to-end automation point for an enterprise. When GDPS provides an alert to fail over some System z images, it can also fail over various open-systems clusters managed by Symantec or Tivoli AppMan clustering software. Data can be managed as a single consistency group with GDPS solutions such as GDPS/PPRC, GDPS/GM or GDPS/MGM, or independently. GDOC supports most any software or hardware data replication available in the marketplace. So if customers require a common restart point for an application that spans various platforms, GDPS can support that environment; or, if the application requires some front-end systems but all transaction data is stored on, say, z/OS, GDPS-GDOC automation can manage that environment as well – an end-to-end enterprise business continuity solution.
31
Case Study #1 Bandwidth Requirements
32
Data Replication Case #1 - Summary
Problem: Global Mirror consistency group formation fails – drain time exceeded, suspended volumes, link incidents (frame transmission retries < 1%).
Configuration details: two 9 Mbit links between sites; 80 volumes mirrored; 3:1 compression on the links; PPRC ports on the same HBA.
Operational details: a script was written to monitor out-of-sync tracks; TPC for Disk provided performance data; TPC-R automated Global Mirror. The customer started with a few mirrored volumes and had no issues with Global Mirror, then increased the number of volumes and the workload, noticed host impact due to the slow links, switched to Global Copy, and ended up with a large number of OOS tracks.
33
Data Replication Case #1 - Bandwidth Requirements
The following chart shows the mirrored-write MB/s profile with indications of the link bandwidth required, assuming 3:1 compression and 80% link efficiency. Exceeding the bandwidth results in a higher RPO.
Workload values above a "9 Mb" link line indicate that the capacity of that many links is exceeded: one to three distance links do not handle the workload; four distance links are sufficient for the current workload, with some longer RPO times during peaks (Global Mirror should be suspended during large peaks); 8 to 12 distance links would be needed for the 25 MB/s peak.
The available bandwidth is two links, not dedicated, and link timeouts occur when they are over-driven.
(A bandwidth-sizing sketch follows.)
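A minimal sizing sketch matching the assumptions stated above (9 Mbit links, 3:1 compression, 80% link efficiency). This is generic bandwidth arithmetic, not a tool output; real studies should use Disk Magic / RMF Magic as recommended earlier, and the 10 MB/s "normal" rate below is only an illustrative value consistent with the four-link conclusion.

```python
import math

def links_required(write_mb_s, link_mbit_s=9.0, compression=3.0, efficiency=0.8):
    """Estimate how many replication links a given mirrored-write rate needs.

    write_mb_s : host write rate to be mirrored, in MB/s
    link_mbit_s: raw link speed in megabits per second
    compression: compression ratio achieved on the link (3:1 -> 3.0)
    efficiency : usable fraction of the raw link (protocol overhead, etc.)
    """
    effective_mb_s = (link_mbit_s / 8.0) * compression * efficiency   # per link
    return math.ceil(write_mb_s / effective_mb_s), effective_mb_s

if __name__ == "__main__":
    for rate in (10, 25):   # illustrative "normal" rate vs. the measured 25 MB/s peak
        n, per_link = links_required(rate)
        print(f"{rate} MB/s needs ~{n} links ({per_link:.2f} MB/s usable per link)")
```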
34
Chart annotation: 4 links = minimum required bandwidth; 2 links = available bandwidth.
35
Data Replication Case #1 – Out-of-Sync Tracks
The first chart shows the PPRC link activity. The available link bandwidth was insufficient; over-driving the links resulted in re-driving frames (timeouts), the delays caused full-track transfers instead of sending data from cache, and the full-track transfers increased the bandwidth requirement further.
The second chart shows the OOS tracks and suspended volumes. The peak workload occurred at 2:00-3:00 a.m. on most days, and the large number of OOS tracks could not be copied before the next peak. OOS tracks must be low for consistency groups to form; here the OOS tracks were fully copied only about twice per week.
(A drain-time estimate sketch follows.)
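A rough drain-time estimate for the out-of-sync backlog, assuming roughly 64 KB transferred per out-of-sync track (an assumption; actual CKD and FB track sizes differ) and the effective per-link throughput from the previous sketch. The sample numbers are illustrative, not measurements from this case.

```python
def drain_hours(oos_tracks, effective_mb_s, track_kb=64, new_write_mb_s=0.0):
    """Rough time to clear an out-of-sync backlog.

    oos_tracks     : current out-of-sync track count
    effective_mb_s : usable replication bandwidth in MB/s
    track_kb       : assumed data moved per OOS track (64 KB is an assumption)
    new_write_mb_s : ongoing write rate still landing on the primary

    Returns float('inf') when new writes arrive faster than the links can
    drain them, i.e. the "cannot catch up before the next peak" situation.
    """
    net = effective_mb_s - new_write_mb_s
    if net <= 0:
        return float("inf")
    backlog_mb = oos_tracks * track_kb / 1024.0
    return backlog_mb / net / 3600.0

if __name__ == "__main__":
    # e.g. 2 million OOS tracks, 2 links at ~2.7 MB/s usable each, 3 MB/s of new writes
    print(f"{drain_hours(2_000_000, 2 * 2.7, new_write_mb_s=3.0):.1f} hours to drain")
```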
36
Full Track Transfers increase required bandwidth
Overdriving the links causes delays; CGs are possible only when the links are not overdriven.
37
Overdriving links causes OOS Tracks and suspended volumes
Cannot "catch up" before the next peak; CGs are possible only when the OOS tracks are low.
38
Data Replication Case #1 – Conclusion
Bandwidth equivalent to four "9 Mbit" links is needed to handle the workload. That is sufficient for the "normal" workload; Global Mirror may need to be suspended during large peaks (a 25 MB/sec peak was measured), and future growth should be considered.
Follow the best practices for data replication: links should be dedicated to guarantee the required bandwidth and reduce link timeouts; PPRC ports should be on dedicated cards and should not share cards with host activity; do not put both links on the same HBA (a single point of failure) – four links on two HBAs would be preferred.
Monitor status and performance: use TPC for Disk to monitor link activity and workload growth, and do not ignore TPC-R messages.
Test the DR procedures.
39
Case Study #2 Performance Requirements
40
Data Replication Case #2 - Summary
Problem: Global Mirror consistency group formation fails (drain time exceeded), and host performance is impacted while GM is active.
Configuration details: distance between sites 37 miles / 60 km; link: GigE = 100 MB/sec; primary DS: 15K RPM drives, RAID 5; secondary DS: 15K RPM drives, RAID 5.
Operational details: TPC for Disk provided performance data; TPC-R automated Global Mirror. The workload has increased since the initial GM design; host impact was noticed when GM was implemented, and the customer switched to Global Copy.
41
Data Replication Case #2 – Performance & Bandwidth
Bandwidth analysis: the available bandwidth is not the cause of the Global Mirror issues. The mirrored workload usually stays well under the link capacity, with short peaks of 50 and 90 MB/sec, and the GigE link can handle 100 MB/sec without compression.
The current Global Mirror configuration follows best practices, so volume placement is not the cause of the performance issues: the PPRC ports are separate from the host ports (good; use every other port on an HBA card for best performance), and the volumes are spread over all arrays (good). However, the PPRC volumes share arrays with FlashCopy targets; do not have the FlashCopy source and target in the same array, and the target should be in the same cluster.
Performance analysis: primary disk utilization is too high even without any mirroring activity, and secondary disk utilization is at its maximum when Global Mirror is active because the FlashCopy activity adds to HDD utilization. When Global Copy is active, HDD utilization decreases. RAID 10 would reduce the HDD utilization (double the number of arms).
(A back-end write-penalty sketch follows.)
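To illustrate why RAID 10 lowers HDD utilization here, a back-of-the-envelope sketch of back-end disk operations per second using the standard small-random-write penalties (RAID 5 = 4 back-end ops per write, RAID 10 = 2). The 70/30 read/write split is an assumed example, not measured data from this case; the 7,300 IOPS figure is the grown workload quoted in the conclusion.

```python
def backend_ops(host_iops, write_fraction, raid="raid5"):
    """Back-end disk operations per second for a random workload.

    Uses the usual small-random-write penalties: RAID 5 does
    read-old-data + read-old-parity + write-data + write-parity (4 ops),
    RAID 10 writes both mirror copies (2 ops); reads cost 1 op either way.
    Cache hits and sequential optimizations are deliberately ignored.
    """
    penalty = {"raid5": 4, "raid10": 2}[raid]
    reads = host_iops * (1 - write_fraction)
    writes = host_iops * write_fraction
    return reads + writes * penalty

if __name__ == "__main__":
    for raid in ("raid5", "raid10"):
        ops = backend_ops(7300, write_fraction=0.3, raid=raid)   # 70/30 split assumed
        print(f"{raid}: ~{ops:.0f} back-end ops/s for 7,300 host IOPS")
```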
42
Global Mirror Global Copy
43
Part 1 (:20 to :40): Global Mirror active; caution and warning levels are marked on the chart.
44
Part 2 :35 to :45 Global Copy
45
GM FlashCopy adds to HDD activity – utilization is too high!
Part 1 (:20 to :40): Global Mirror active.
46
Part 2 :35 to :45 Global Copy
47
Data Replication Case #2 – Conclusion
The previous Disk Magic study showed good results for the original workload: 4,800 I/Os per second with 35% HDD utilization. However, the workload has grown to 7,300 I/Os per second with HDD utilization above 90%, and other systems were added.
The HDD utilization is too high. This condition will cause host performance issues even without Global Mirror active; the overloaded arrays at the remote site cause host performance issues when Global Mirror is active, and the added copy activity at the primary site causes host performance issues when Global Copy is active.
The current workload has reached the limit of the current configuration: spread the workload over more arrays, use RAID 10, and plan for future growth. Continue to use TPC Standard Edition to monitor performance.
48
Further Assistance DS8000 Architecture & Best Practices - Replay
Time given: August 13; 11:00 a.m. New York, 4:00 p.m. London, 5:00 p.m. Paris, 15:00 GMT.
IBMers:  BPs:  Contact ATS:
Business Partners: PartnerWorld Contact Services (US & Canada) or fill out a request online. IBMers: open a Techline request.
49
Additional References
51
DS8000 FlashCopy Options
Once the source/target "logical" relationship is established, both volumes are available for read/write.
Multiple relationships – a single source may have up to 12 targets.
Background copy optional; NoCopy to Copy (background copy).
Persistent and Incremental.
Consistent FlashCopy (across volumes in a single DS8K or across multiple DS8Ks).
The target device may be in any LSS.
Space Efficient FlashCopy (a single FlashCopy repository for all target volumes); for backup it typically requires 10-20% of the actual space.
Remote Pair FlashCopy.
52
IBM HyperSwap Technology -> Higher Availability for Parallel SYSPLEX !
The ability to swap enterprise-class System z disk subsystems in seconds: HyperSwap substitutes the Metro Mirror secondary for the primary device with no operator interaction. It is designed to scale to many thousands of z/OS volumes, includes volumes with SYSRES, page data sets and catalogs, and is non-disruptive – applications keep using the same device addresses. HyperSwap integration with z/OS yields higher availability for z/OS applications.
Basic HyperSwap (GA 2008): a single-site continuous availability function for unplanned failures and planned fail-overs (testing), aimed at masking disk failures; IBM disk subsystems only (ESS 800, DS6000, DS8000).
GDPS/PPRC HyperSwap Manager (GA 2006): single site or multiple sites; continuous availability and/or an entry-level DR solution; any vendor's disk subsystem that supports the IBM PPRC architecture (e.g. IBM, EMC, Hitachi (HP & Sun)).
GDPS/PPRC with HyperSwap (GA 2002): full-function HyperSwap across multiple sites for D/R and high availability; includes server, workload and network management across sites in addition to storage; supports GDPS/MzGM and GDPS/MGM environments; any vendor's disk subsystem that supports the IBM PPRC architecture (e.g. IBM, EMC, Hitachi (HP & Sun)).
Diagram: primary (P) to secondary (S) Metro Mirror, with the UCBs swapped at HyperSwap time.
Speaker notes: The HyperSwap function is designed to broaden the continuous availability attributes of z/OS by extending Parallel Sysplex redundancy to disk subsystems. The planned HyperSwap function provides the ability to transparently switch all primary PPRC disk subsystems with the secondary PPRC disk subsystems for a planned reconfiguration, and to perform disk configuration maintenance and planned site maintenance without requiring any applications to be quiesced. The unplanned HyperSwap function contains additional function to transparently switch to the secondary PPRC disk subsystems in the event of unplanned outages of the primary PPRC disk subsystems, allowing production systems to remain active during a disk subsystem failure; disk subsystem failures no longer constitute a single point of failure for an entire Parallel Sysplex. Basic HyperSwap in z/OS 1.9 is a single-site, high-availability, entry-level solution managed by the TPC-R software shipped with z/OS. Basic HyperSwap can scale to all volumes within the sysplex, including all system volumes. It is limited to IBM disk storage subsystems (ESS 800, DS6000 and DS8000) and is available as a PTF on z/OS 1.9.
53
DS8000 Global Mirror: Concept
Asynchronous long-distance copy (Global Copy), i.e. little to no impact to application writes.
End-to-end data integrity is verified: a CRC is sent and verified with each changed block/record, GM detects dropped FCP frames, and for ECKD devices the track format is also sent and verified in the metadata.
The consistency group (CG) is "marked" at the primary, but the CG is formed at the target site, which yields continuous bandwidth utilization.
The cycle:
1. Momentarily pause application writes (a fraction of a millisecond to a few milliseconds).
2. Create a point-in-time consistency group across all primary subsystems (in the OOS bitmap); new updates are saved in the change-recording bitmap.
3. Restart application writes and complete the write (drain) of the point-in-time consistent data to the remote site.
4. Stop the drain of data from the primary (after all consistent data has been copied to the secondary).
5. Logically FlashCopy all data (i.e. the secondary is consistent; now make the tertiary look like the secondary).
6. Restart Global Copy writes from the primary.
The sequence repeats automatically every few seconds to minutes to hours (selectable, and can be immediate). (A conceptual sketch of the cycle follows.)
Intended benefit: long distance, no application impact (adjusts to peak workloads automatically), small RPO, a remote-copy solution for zSeries and Open Systems data, and consistency across multiple subsystems.
FlashCopy is used with the record, nocopy, persistent and inhibit-target-write options; Global Copy (PPRC-XD) runs over long distance on FCP links only and could require channel extenders.
Speaker notes: Cascading PPRC is the basic enabler of a long-distance D/R solution with no data loss. PPRC-XD requires the creation of consistent data at the remote site, meaning that the Recovery Point Objective had to be something other than current; the RPO was determined by how often the user could 'go to sync'. Cascading PPRC allows a PPRC secondary volume to also be a PPRC primary volume with another relationship to a different PPRC secondary volume; this intermediate volume sits between the first and third volumes. It is a unidirectional replication approach that typically replicates the data from the first volume via PPRC SYNC to the intermediate volume and from there to the third volume using PPRC-XD.
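A conceptual sketch of the consistency-group cycle described above, written as plain Python. Sets stand in for the out-of-sync (OOS) and change-recording bitmaps; none of this resembles the real microcode interfaces, it only mirrors the numbered steps on the slide.

```python
# Toy simulation of one Global Mirror consistency-group cycle.

def form_consistency_group(oos, incoming_writes):
    """One CG cycle: mark, drain, FlashCopy, resume."""
    # 1. Briefly pause application writes and freeze the current OOS bitmap as the CG.
    consistency_group = set(oos)
    oos.clear()

    # 2. New updates that arrive while the CG drains go to the change-recording bitmap.
    change_recording = set(incoming_writes)

    # 3. Drain the CG tracks to the remote B volumes (Global Copy).
    drained_to_b = set(consistency_group)

    # 4. Once the drain completes, logically FlashCopy B to C so C holds the consistent image.
    c_image = set(drained_to_b)

    # 5. Restart Global Copy: the CR bitmap becomes the new OOS bitmap for the next cycle.
    oos.update(change_recording)
    return c_image

if __name__ == "__main__":
    oos = {"trk1", "trk2", "trk3"}            # updates outstanding when the cycle starts
    consistent = form_consistency_group(oos, incoming_writes={"trk4"})
    print("consistent image:", sorted(consistent))
    print("carried to next cycle:", sorted(oos))
```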
54
Metro/Global Mirror architecture
Diagram: a server or servers at the local site (Site A) write to the A volumes; Metro Mirror (synchronous, small distance) mirrors A to B at the intermediate site (Site B) over the Metro Mirror network; Global Mirror (asynchronous, large distance) copies B to C at the remote site (Site C) over the Global Mirror network, with a NOCOPY FlashCopy from C to D. The diagram also marks the extended long busy / queue full condition.
Metro Mirror write (normal application I/O): 1. the application writes to VolA; 2. VolA is mirrored to VolB; 3. write complete is returned to A; 4. write complete is returned to the application.
Global Mirror consistency group (CG) formation: a. write updates to the B volumes are paused (< 3 ms) to create the CG; b. the CG updates on the B volumes are drained to the C volumes; c. after all updates are drained, the changed data is FlashCopied from C to D.
55
Delivered by IBM Global Services
Integrated solutions (technical sales support, Americas), delivered by IBM Global Services: RCMF/PPRC, GDPS/PPRC HyperSwap Manager, GDPS/XRC, RCMF/XRC, GDPS/GM, GDPS/MGM; GDOC (Veritas Clusters + IBM MM, GM or EMC SRDF, SRDF/A or VVR); AIX/HACMP 5.1 + Metro Mirror; Windows GeoDistance MSCS + Metro Mirror scripts.
DS6000/DS8000 data replication options: FlashCopy (FC, within the box); Metro Mirror (MM/PPRC, synchronous copy); Global Copy (GC, asynchronous copy, no consistency); PiT Incremental FC to a MM, GC or GM primary; Global Mirror (GM, asynchronous copy); Global Mirror for zSeries (zGM); Metro Global Mirror (MGM); zGM + MM/PPRC Multi-Target (MzGM). Managed by: TotalStorage Productivity Center – Replication Manager (TPC-R) or Geographically Dispersed Parallel Sysplex (GDPS) (see the integrated solutions on the inside flap). For more information, please contact your local IBM Representative or IBM Business Partner. May 2008.
FlashCopy – point-in-time copy available on DS8000, DS6000 and ESS. Features: multiple relationships (a single source may have up to 12 targets); background copy optional; NoCopy to Copy (background copy); persistent and incremental; Consistent FlashCopy; target device may be in any LSS; PiT Incremental FlashCopy to a MM, GC or GM primary. Read and write to both source and copy are possible; optional background copy; when the copy is complete, the relationship between source and target ends.
Metro Mirror – synchronous mirroring available on DS8000, DS6000 and ESS. Designed to provide: no data loss; industry-leading replication performance; high availability with GDPS HyperSwap; System z and open systems data; ease of use, lower cost. Diagram: the host issues the write request to the primary volume A (1), the primary sends the I/O write to the secondary volume B (2), the secondary confirms the I/O write (3), and the write is acknowledged to the host (4).
56
Global Mirror - Asynchronous mirroring available on DS8000™, DS6000™, and ESS
DS6K/DS8K/ESS Global Mirror key design points: the capability to achieve an RPO of 3-5 seconds with sufficient bandwidth and resources; no impact to production applications when insufficient bandwidth and/or resources are available; scalable, providing consistency across multiple primary and secondary disk subsystems; removal of duplicate writes within a consistency group before sending data to the remote site; the ability to configure less than peak bandwidth by allowing the RPO to increase without restriction at peak times; consistency between System z and open systems data and between different platforms on open systems.
z/OS Global Mirror – asynchronous mirroring for System z, available on DS8000 and ESS 800. Designed to provide premium performance and scalability: data is moved by DFSMS System Data Mover (SDM) address space(s) running on z/OS; supports heterogeneous disk subsystems; unlimited distances; time-consistent data at the recovery site; RPO within seconds; supports System z and System z Linux data; over 200 installations worldwide; 3-site GDPS/PPRC HyperSwap and GDPS/XRC "Multi-Target" supported.
Metro / Global Mirror (MGM) – three-site synchronous and asynchronous mirroring, available on DS8000. Designed to provide performance and scalability (Metro Mirror, Global Mirror) and to satisfy all three-site requirements: fast failover/failback to any site; fast re-establishment of three-site recovery without production outages; resynchronization of any site with incremental changes only*; ease of use, autonomic, self-monitoring.
Metro / zGlobal Mirror Multi-Target (MzGM) – three-site synchronous and asynchronous mirroring, available on DS8000. Designed to provide performance and scalability (Metro Mirror, zGlobal Mirror) and to satisfy all three-site requirements: HA and CA capability at metro distance; out-of-region DR with fast recovery; resynchronization of any site with incremental changes only*; ease of use, autonomic, self-monitoring.
Diagram: A, B and C volumes across the metro and global legs, with the 'A' primary at native performance, the 'B' Global Copy secondary, consistent-data FlashCopy, remote hosts on the SAN, and System Data Movers between the primary and secondary hosts.
© International Business Machines Corporation, 2007. IBM, the IBM logo, GDPS, Global Dispersed Parallel Sysplex, HyperSwap, z/VM, z/OS, z/VSE and System z are trademarks of International Business Machines Corporation in the United States, other countries or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product and service names may be trademarks or service marks of others.
57
DS6K/DS8K Copy Services Matrix
Compatibility matrix: for each role a device currently holds – GMz (XRC) primary or secondary, Metro Mirror or Global Copy primary or secondary, Global Mirror primary or secondary, FlashCopy source or target, Incremental FlashCopy source or target, Concurrent Copy source – the table indicates which of the other roles the device may also become (Yes/No entries, with numbered restrictions explained in the notes on the next slide).
58
DS6K/DS8K Copy Services Matrix Notes
Only in a Metro/Global Copy environment (supported on ESS) or a Metro/Global Mirror environment (supported on ESS and DS8000).
FlashCopy V2 at the stated LIC level and higher on the ESS 800 (DS6000 and DS8000 use FlashCopy V2 by default).
You must specify the proper parameter to perform this.
The Metro Mirror primary will go from full duplex to copy pending until all of the flashed data is transmitted to the remote site.
A Global Mirror primary cannot be a FlashCopy target.
FlashCopy V2 Multiple Relationship.
FlashCopy V2 Data Set FlashCopy (only available for z/OS volumes).
The storage controller will not enforce this restriction, but it is not recommended.
A volume may be converted between the Global Mirror primary, Metro Mirror primary and Global Copy primary states via commands, but two such relationships cannot exist at the same time (i.e. no multi-target).
GMz (XRC) primary, Global Mirror secondary, Incremental FlashCopy source and Incremental FlashCopy target all use the change-recording function; for a particular volume only one of these relationships may exist.
Updates to the affected extents will result in the implicit removal of the FlashCopy relationship if the relationship is not persistent. This relationship must be the FlashCopy relationship associated with Global Mirror, i.e. there may not be a separate Incremental FlashCopy relationship.
Global Mirror for z/OS (GMz) is supported on ESS and DS8000. In order to ensure data consistency, the XRC journal volumes must also be copied.
59
Reference Resources
60
References “Global Mirror Whitepaper”, V1-3
By Nick Clayton, 13/09/2005
z/OS DFSMS Advanced Copy Services, SC , January 2006
DSx000 Command-Line Interface User's Guide: DS6000, GC , September 2006; DS8000, SC , November 2006
Device Support Facilities User's Guide and Reference, Release 17, GC , March 2005
Redbooks or w3.itso.ibm.com: search on "DS6000" and then "DS8000" – many choices available; also search on "copy services" – again, many choices available
61
To COPY or to NOCOPY?…. That is the question!
BACKGROUND NOCOPY is typically the best choice to minimize rank and DA activity within the physical box. But you must ask why you are making the copy, and what type of application workload you have. For example:
Is the copy only going to be used for creating a tape backup? BACKGROUND NOCOPY should be used and the relationship withdrawn after the tape backup is complete.
Is the copy going to be used for testing or development? NOCOPY again is typically the best choice.
Will you need a copy of the copy? BACKGROUND COPY must be used so that the target will be withdrawn from its relationship after all of the tracks are copied, thereby allowing it to be a source in a new relationship; possibly use the NOCOPY-to-COPY option.
Is the workload OLTP (NOCOPY is typically the choice), or is there a large number of random writes that are not cache friendly (COPY may be the better choice)?
(A small decision-helper sketch follows.)
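A small decision helper capturing the rules of thumb above. It encodes only what this slide says and is a sketch for illustration, not official guidance; the category names are arbitrary.

```python
def flashcopy_mode(purpose, cache_friendly_writes=True):
    """Suggest COPY vs. NOCOPY per the rules of thumb on this slide.

    purpose: 'tape_backup', 'test_dev', 'copy_of_copy', or 'other'
    cache_friendly_writes: False for heavy random, cache-unfriendly writes
    """
    if purpose == "tape_backup":
        return "NOCOPY (withdraw the relationship after the tape backup completes)"
    if purpose == "test_dev":
        return "NOCOPY"
    if purpose == "copy_of_copy":
        return "COPY (or NOCOPY-to-COPY) so the target can later be a source"
    # OLTP and cache-friendly workloads favour NOCOPY; heavy random,
    # cache-unfriendly writes may do better with a background COPY.
    return "NOCOPY" if cache_friendly_writes else "COPY"

if __name__ == "__main__":
    print(flashcopy_mode("tape_backup"))
    print(flashcopy_mode("other", cache_friendly_writes=False))
```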
62
References SC26-7916 DS8000 Command-Line Interface User’s Guide
GC – DS6000 Command-Line Interface User's Guide
SC – DFSMS Advanced Copy Services
SG – IBM System Storage DS8000 Series: Copy Services in Open Environments
SG – IBM System Storage DS8000 Series: Copy Services with IBM System z
SG – IBM System Storage DS6000 Series: Copy Services in Open Environments
SG – DS6000 Series: Copy Services with IBM System z Servers
GC – Device Support Facilities User's Guide and Reference, Release 17
63
References
"Global Mirror Whitepaper", V1-3, by Nick Clayton, 13/09/2005
WP100642 – Performance White Paper: DS8000/DS6000 Copy Services: Getting Started
WP100905 – DS8000 Disk Mirroring Licensing: Frequently Asked Questions
Redbooks or w3.itso.ibm.com: search on "DS6000" and then "DS8000"; also search on "copy services"
IBM System Storage DS8000 Series: Copy Services in Open Environments, SG Redbook, published 29 November 2006
IBM System Storage DS8000 Series: Copy Services with IBM System z, SG Redbook, published 14 December 2006
64
References Technical ITSO Redpapers on Business Continuity Solutions:
Technical information on IBM TotalStorage Business Continuity Solutions:
IBM ITSO Redbook: TotalStorage Business Continuity Solutions Guide, SG :
IBM ITSO Redbook: TotalStorage Business Continuity Solutions Overview, SG :
Technical ITSO Redpapers on Business Continuity Solutions:
REDP4062 – TotalStorage Business Continuity Solution Selection Methodology. This ITSO Redpaper describes an IT Business Continuity solution selection methodology and how to apply it to your computing environment; several scenarios demonstrate the application of the methodology.
REDP4063 – Planning for Heterogeneous Platform BC and DR. This ITSO Redpaper discusses how to plan for IT Business Continuity in a highly heterogeneous platform server and storage installation, including a discussion of IBM storage-based tools that can be useful to provide Business Continuity and Disaster Recovery in this diverse environment. It is a 2005 version of this presentation.
65
References Technical ITSO Redpapers on Business Continuity Solutions:
REDP4064 – Small and Medium Business Considerations for IT Business Continuity. Small and Medium Business (SMB) enterprises have IT Business Continuity needs and concerns similar to large enterprises, yet in other ways SMB companies have key differences. This Redpaper discusses those differences and gives an overview of IT Business Continuity solution selection in the SMB business environment.
REDP4066 – Networking Tutorial for IT Business Continuity Planners. Confused by terms such as switches, routers, bridges, OC3, DWDM, dark fibre? This Redpaper is intended to enable the IT Business Continuity planner to better understand many commonly used networking concepts, in order to better evaluate and select the appropriate networking components for an IT Business Continuity solution.
66
References IBM Implementation Services for Geographically Dispersed Open Clusters (GDOC) IBM Geographically Dispersed Parallel Sysplex: IBM System Storage Business Continuity Solutions website: IBM Global Services Business Continuity and Recovery Services: IBM Business Continuity and Recovery Services Your local IBM Global Services ITS representative
67
System Storage Enterprise Disk Practices Resources
System Storage Business Continuity Solutions website
System Storage Technology Center
Storage Education
System Storage Interoperation Center
System Storage Services
Redbooks/Redpapers:
The IBM TotalStorage DS8000 Series: Concepts and Architecture (SG )
IBM System Storage Business Continuity Solutions Overview (SG )
IBM System Storage DS8000 Series: Copy Services with IBM System z (SG )
IBM System Storage DS8000 Series: Copy Services in Open Environments (SG )
IBM System Storage Solutions Handbook (SG )
White papers:
IBM Storage Infrastructure for Business Continuity Solution
Global Mirror Technical Whitepaper
68
References IBM System Storage Business Continuity Solutions website:
IBM Global Services Business Continuity and Recovery Services: IBM Business Continuity and Recovery Services Your local IBM Global Services ITS representative
69
* IBM Confidential documents
Following are the IBM links to presentations on the Enterprise Disk Mirroring Sales Kit on System Sales Web Site IBM Three Site Enterprise Disk Mirroring Executive Summary IBM Three Site Mirroring Competitive Marketing Summary for IBM* IBM Two and Three Site Enterprise Disk Mirroring Overview IBM Three Site Disk Mirroring for Open Systems Presentation Guide IBM Three Site Disk Mirroring for zSeries Presentation Guide IBM z/OS Global Mirror and Global Mirror Positioning Guide IBM Two and Three Site Disk Mirroring Technical Presentation Guide Deep Dive on IBM Global Mirror – from US Storage Symposium 2005 IBM Disk Mirroring Update – from US Storage Symposium 2005 IBM DS8000 DS6000 ESS Disk Mirroring Link Efficiency TCO Studies* IBM Two and Three Site Mirroring Competitive Marketing Guide* * IBM Confidential documents
70
Trademarks CICS* ClearCase DB2* e-business logo FICON* GDPS* HyperSwap
The following are trademarks of the International Business Machines Corporation in the United States and/or other countries. CICS* ClearCase DB2* e-business logo FICON* GDPS* HyperSwap IBM* IBM eServer IBM logo* IMS* MQSeries* On Demand Business logo Parallel Sysplex* Rational* System z9 Tivoli* WebSphere* z/OS* z/VM* zSeries* * Registered trademarks of IBM Corporation The following are trademarks or registered trademarks of other companies. Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States, other countries or both. Microsoft is a trademark of Microsoft Corporation in the United States, other countries, or both. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. * All other products may be trademarks or registered trademarks of their respective companies. Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.
71
Disclaimers
Copyright © 2009 by International Business Machines Corporation. No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation.
Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This information could include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) at any time without notice. Any statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business. Any reference to an IBM program product in this document is not intended to state or imply that only that program product may be used; any functionally equivalent program that does not infringe IBM's intellectual property rights may be used instead. It is the user's responsibility to evaluate and verify the operation of any non-IBM product, program or service.
THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein. The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY, U.S.A.