
1 Technology Spotlight on
EMC Symmetrix Remote Data Facility

2 Agenda
Enterprise Storage
SRDF Introduction
Business Challenges
SRDF Configurations
SRDF Operational Modes
SRDF Operation

3 Enterprise Storage Framework
4. Information Protection—Enterprises Demand Continuous Access to Information
Enterprise storage provides reliable, bullet-proof business continuance protection against planned and unplanned outages through a diverse set of powerful solutions that make information continuously available.
5. Information Sharing—Timely Access to Enterprise Information
Enterprise storage employs advanced software intelligence to make data formats transparent to users. These software products access and deliver huge volumes of information at very high speeds to users working on different computing platforms—with enterprise storage at the center of their operations. Through information sharing, enterprise storage effectively breaks down the wall between mainframe and open-system environments and creates a bridge between databases on these systems.
File transfers: speed the existing flow of information
Database extracts: speed information flow without impacting CPUs or networks
File sharing: bring enterprise attributes to distributed environments
6. Information Management—Manage Growing Volumes Across All Platform Types
Through a common information management environment, enterprise storage simplifies tasks. For instance, it allows for seamless backup and restore, and delivers data about user performance requirements for every platform it supports—from a centrally managed point of control.
Monitoring/management of storage resources
Viewing/changing utilization across the enterprise
Allocation of storage resources
Enterprise storage as a shared asset
Backup/restore
The new definition of storage—enterprise storage—must be able to do all of these things; it's a much broader definition than before. Though you may not be familiar with each of the EMC Enterprise Storage software products identified on this chart, you're certainly familiar with the requirements. Each EMC software product delivers a primary business impact, which is how we've categorized them on this chart, but there's more to the story. It's important to understand that they also affect other areas. Our customers justify their EMC Enterprise Storage software decision because it delivers beyond a single requirement or capability: EMC Enterprise Storage software products address multiple requirements across the enterprise.
Chart: EMC products grouped under Information Protection, Information Sharing, and Information Management (ESP, SMTF, Data Reach, SNFS, TimeFinder, SDMS, Symmetrix Manager, FDRSOS, EDM, Internet Services, SRDF).

4 SRDF Introduction

5 What Does SRDF Do? SRDF creates and maintains a real-time or near-real-time, physically separate copy of data, without using host cycles, in a local or remote EMC Symmetrix. That copy can be directly addressed by a designated host system even in the event of individual drive or link failures. This is a simple definition of SRDF to allow us to begin our technical discussion of the product. The user chooses whether the copies created by SRDF are real-time or near-real-time copies based on what SRDF is being used for. The key point is that SRDF creates copies that may be used for business continuance, point-in-time backup, or data movement and propagation without using the host's processing cycles or the network.

6 SRDF Nomenclature
Local Volume: May be spared, locally mirrored, unmirrored, or part of a RAID-S group.
Source Volume: An active application volume containing production data and programs. The standard volume may be locally mirrored, unmirrored, part of a RAID-S group, and/or covered by a global spare.
Target Volume: A volume that contains the copy of a Source Volume.
Synchronized Volumes: The background task performed by Symmetrix to create an identical mirror image of two volumes, as in RAID 1. One volume is always synchronized to another.
RLD (Remote Link Director): The Symmetrix SRDF communication feature.
Configuration Types: Uni-directional, Bi-directional, Dual
Primary Operational Modes: Synchronous, Semi-synchronous
Secondary Operational Modes: Adaptive Copy, Domino, Invalid Tracks
Other Definitions: M1: a Source Volume. M2: the RAID 1 mirror of the M1. R1: source side of the remote mirror. R2: target side of the remote mirror.
Here are some definitions of the terms that we will be using today in our discussion. It may be helpful to keep this page aside for reference as we go through the seminar.

7 Symmetrix Remote Data Facility
Campus (66 km) or Extended Distance (T1/T3, E1/E3, ATM)
Logically synchronized versions of selected volumes
Independent of CPU, operating system, application or database
Resilient against drive, link and server failures
Selectable synchronization behavior
Necessity is the mother of invention. In the case of SRDF, necessity was business continuance and disaster recovery capability without sacrificing performance. Since the introduction of SRDF, EMC has sold over 2,000 licenses, and many of our customers have discovered new and different ways to use SRDF to accomplish a variety of objectives. SRDF has evolved as a solution beyond disaster recovery and business continuance to point-in-time backup, moving data to remote locations, and loading decision support systems, for example. Additionally, we found that many customers had differing objectives and priorities in creating disaster recovery capability; SRDF provided the flexibility they needed to meet their individual requirements.

8 Business Challenges

9 The Business Challenge
The Business Problem
Data inaccessibility measured in $$$$
Continuous availability for all data becoming the norm
Disaster recovery for business receiving renewed focus
Symmetrix Remote Data Facility
Maintain real-time or near-real-time physically separate copy of selected volumes
Use no host CPU resources
Operating system independent
Continue running through events such as individual drive/link failures
Here we see some of the driving forces behind SRDF's enormous popularity. These challenges are not new to any of us; however, they are no longer daunting impossibilities. All of these business challenges can be met successfully with SRDF.

10 SRDF Applications
Business Continuance: Act as a separate mirror for business and application continuance
Disaster Recovery: Very rapid data availability with data current to the last write
Data Center Migration: Effect a data center move with minimum application outage
Workload Management: Provide service through scheduled outages such as maintenance
Here are some examples of how our current customers have been using SRDF. Many customers use SRDF to achieve multiple objectives - clearly an advantage of an enterprise-wide solution that can be deployed in a heterogeneous environment using a common set of tools and user training.

11 Disaster Recovery with SRDF
Disaster recovery across the enterprise
Consolidates heterogeneous platforms into one DR solution
Restart applications at the recovery site with no data restore required
Maximized application availability
Test the Disaster Recovery process in place
Validate completeness of the DR plan to minimize "surprises"
Return home after disaster recovery with minor disruption
Synch and re-synch automated and transparent to users
Disaster recovery. The heritage of SRDF. Beyond continuous operation in the event of a failure, it is an easy-to-use solution that enables customers to conduct tests in advance of a failure to ensure their success. It is easy enough to use and dependable enough for customers to rely on SRDF's capabilities even in the face of relatively minor failures - those failures that may slow a system down rather than stop it can be averted. We don't like to see our customers face a failure situation, but one that we are particularly proud of involved a financial institution in Scotland. A lowly network card failed on their primary production system. Their service provider estimated by phone that the total repair could take as long as three days. Not a disaster: there were other network cards that could be used to double up on the busy traffic on the system, and with luck it might be repaired sooner once the service technician evaluated the problem. What would you do? Our customer immediately switched to their DR (disaster recovery) CPU and ran from the data stored in a remote Symmetrix, synchronized to the local Symmetrix with SRDF. They had confidence in their DR plan - SRDF made it simple enough to test on a regular basis. Were the users impressed? Hardly; they were never aware of the problem. One more thing about this story: when the network card was replaced (yes, it did take three days), our friends could thoroughly test the production system and switch back from the DR site quickly and without interruption. This is the part that IT professionals have most ignored, and remember most if they have experienced a disaster - switching back. Too many experience the disaster a second time as they switch back to their production system, if they have been lucky enough to live through the initial disaster.

12 SRDF Configurations

13 SRDF Uni-Directional Configuration
This configuration shows SRDF being used to protect or move data from the primary site (SITE A) to SITE B. Although we show the recovery path to remote CPUs, these CPUs are optional.
Diagram: CPUs at SITE A write to local source volumes that are mirrored to target volumes at SITE B over RLD 1 and RLD 2 (Fibre, ATM, or T1/T3); optional CPUs at SITE B provide the recovery path.

14 SRDF Bi-Directional Configuration
In a campus environment, two systems can protect each other by using a bi-directional configuration. A copy of data from SITE A is stored at SITE B, and a copy of the local data from SITE B is stored at SITE A. The following chart shows how we would configure SRDF to accomplish the same effect for remote devices...
Diagram: CPUs at SITE A and SITE B each write to local source volumes that are mirrored to target volumes at the other site over RLD 1 and RLD 2 (Fibre only); active and recovery paths run in both directions.

15 Dual SRDF Configuration
Think of this as two uni-directional configurations, one running in each direction.
Diagram: CPUs at SITE A and SITE B each write to local source volumes that are mirrored to target volumes at the other site over RLD 1 and RLD 2 (Fibre, ATM, or T1/T3); active and recovery paths run in both directions.

16 Campus Solution
SRDF links use standard ESCON protocol and must conform to the same limitations as standard ESCON with respect to distance. There can be up to 60 km on single-mode links between the Master and Slave if repeaters are used.
Diagram: SITE A and SITE B connected by an active channel and a recovery channel over private fiber or a common carrier (60 km maximum, with repeaters required at intervals), through ESCON equipment (9036, 9032, 9033, 9191).

17 Extended Distance Solution
ATM, T3 or E3
Extended distance is accomplished with T1, T3, or ATM. Your environment will determine the right performance/cost alternative.
Diagram: SITE A and SITE B connected through network devices over an active channel and a recovery channel on a leased carrier; distance is limited by the carrier.

18 Extended Distance: The Problem
Speed of light (186,000 miles/sec) is constant!
1 ms delay per 125 miles (x 2 for response)
8 ms for send and confirm across a 500-mile trek
Single queue, multiple servers (RLDs). For a 500-mile link with I/O from 6 devices queued:
first 2 devices delayed 8 ms each
next 2 delayed ~16 ms each
last 2 delayed ~24 ms each
links are idle 87% of the time - more for longer distances
You see the problem with standard mode - the speed of light - because standard mode keeps the links reserved until the acknowledgement is received.
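To make the queue arithmetic above concrete, here is a minimal Python sketch that uses only the figures on the slide (roughly 1 ms of one-way delay per 125 miles of fiber, two RLDs serving a FIFO queue). It illustrates the math, not EMC code.

```python
# Illustrative arithmetic only, based on the slide's figures.

def round_trip_ms(miles):
    """Send-and-confirm delay for one I/O over the given distance."""
    one_way = miles / 125.0          # ~1 ms per 125 miles in fiber
    return 2 * one_way               # data out plus acknowledgement back

def standard_mode_delays(miles, num_ios, num_links):
    """Completion delay seen by each queued I/O when every link is held
    until its acknowledgement returns (standard mode)."""
    rt = round_trip_ms(miles)
    # I/Os are served in FIFO batches of num_links; each batch waits for
    # every earlier batch to finish its round trip.
    return [rt * (i // num_links + 1) for i in range(num_ios)]

print(round_trip_ms(500))                 # 8.0 ms, as on the slide
print(standard_mode_delays(500, 6, 2))    # [8, 8, 16, 16, 24, 24] ms
```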

19 Extended Distance: Standard and FarPoint
SRDF Standard Mode: one block per link; the link waits for the confirming response.
SRDF FarPoint: multiple logical devices per link; each device waits for its confirming response.
Diagram: a queue of devices (Dev 0 through Dev 4) feeding data and commands through paired RLDs, with status returned, shown for both Standard Mode and FarPoint.
Here is what FarPoint looks like graphically. Notice that only one block is transferred per logical device. We like this because, if there is a problem on the link or a device, we can revert to standard mode until the problem is resolved. This is operational reality - think of how a disk subsystem works when a block is being written to a disk drive: a second block must wait in a queue until that disk drive has completed the first operation.

20 Extended Distance: FarPoint
Send Multiple -- Confirm Multiple
Multiple devices, multiple servers (RLDs)
Queued in order received
Up to n transactions in the "Pipe" at any time
Variable based on bandwidth, distance, and block size
FarPoint allows many I/Os to transfer to the remote Symmetrix on the same link. We call this "Send Multiple - Confirm Multiple" because SRDF FarPoint fills the pipeline link with as many I/Os as are available, up to the capacity of the link. As with standard mode, FarPoint handles multiple devices and Remote Link Directors - but with FarPoint, they don't have to wait for the link to become ready if there is an I/O in transit. The queue is unaffected; I/Os are still transferred in the order they are received. How many I/Os can be in transit along the link? That depends on the bandwidth of the link, the distance, and the block size being transferred. Here is an analogy: in standard mode, data movement is similar to a freight train with a single set of tracks connecting two points. Think of the amount of data as the length of the train. Each I/O waits at the station until the locomotive arrives. The data is loaded onto the rail cars, the train departs, travels to the destination point and then returns with confirmation of receipt, and is now ready for another run. During that time, other data shipments must wait until the locomotive returns. Increasing the speed of the locomotive will improve efficiency, but only one train can be on the tracks at any given time, just as a faster communication line improves efficiency. Increasing the amount of freight for each run improves the total amount being transferred, just as larger block sizes increase throughput. In this example the train tracks are always utilized, but work is only accomplished where the locomotive is. In contrast, FarPoint is more like a highway for an assortment of cars, trucks, and tractor-trailers. Moments after a truck has left for its destination, another truck can use the same highway for another delivery. Faster trucks mean more freight moved. Additional lanes mean more trucks on the highway - much like additional bandwidth for transferring data. The empty trucks return on different lanes with their confirmation of delivery, ready to transfer another load. The number of shipments is determined by the speed limit of the highway, the number of lanes on the highway, the length of each truck - corresponding to the block size transferred - and finally the distance being traveled. The farther the distance, the more trucks can be on the road in transit. Now here is a pop quiz: two I/Os of identical size travel the same distance, one in Standard Mode, one in FarPoint mode. Which I/O is confirmed first? Answer: the FarPoint I/O, because it didn't have to wait in the queue for the link to come ready. Of course it would be a tie if these were the only I/Os, but how realistic is that?
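As a rough illustration of "up to n transactions in the pipe," the sketch below estimates in-flight I/Os from the bandwidth-delay product divided by block size. The formula and the example figures (a 45 Mbit/s T3-class link, 500 miles, 4 KB blocks) are our assumptions for illustration, not a published SRDF formula.

```python
# Back-of-the-envelope estimate, not EMC code.

def in_flight_ios(bandwidth_mbit_s, distance_miles, block_kb):
    round_trip_s = 2 * (distance_miles / 125.0) / 1000.0   # ~1 ms per 125 miles, out and back
    bytes_in_pipe = (bandwidth_mbit_s * 1e6 / 8) * round_trip_s
    return max(1, int(bytes_in_pipe // (block_kb * 1024)))

# Example: a T3-class link (~45 Mbit/s), 500 miles, 4 KB blocks.
print(in_flight_ios(45, 500, 4))   # roughly 10 blocks can be in transit at once
```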

21 FarPoint: Impact
Higher Level of Throughput
Leveling of Response Time
Base Response Times Not Changed - Speed of Light Unaffected
More Impact at Higher Distances and Heavier Loads
Here are the highlights of the benefits that FarPoint provides. Overall throughput increases - more data can be transferred within a given amount of time. This prevents degraded response times when I/O traffic increases, providing response times that stay level and consistent. The third bullet is a tongue-in-cheek reminder that, because FarPoint does not alter the speed of light, we are not increasing the speed of data transfer but rather reducing the queuing time. The performance benefit of FarPoint is more dramatic during heavy traffic loads and/or over longer distances, because these are the factors that increase the wait time in the queue in Standard Mode.

22 FarPoint: Special Considerations
Requires 5x64 microcode
Default for Extended Distance (T3/E3)
Supported: Synchronous mode, Semi-Synchronous mode, Adaptive Copy mode
Not Supported: Bi-Directional mode
Error Recovery: flush the pipe, revert to Standard Mode
Here are the details on FarPoint - notice that it cannot be used in the campus-oriented Bi-Directional mode. As a reminder, a campus environment with heavy traffic loads will be configured as a Dual configuration, which is supported by FarPoint.

23 SRDF Operational Modes

24 SRDF Modes of Operation
Synchronous
Semi-Synchronous
Adaptive Copy: Disk Mode, Write Pending Mode
SRDF processes I/O to the Remote Symmetrix in serialized queues (Synchronous, Semi-Synchronous)
SRDF updates I/O to the Remote Symmetrix periodically, based on the Time Slice setting (Adaptive Copy)
There are three operational modes of SRDF: the first two are used for disaster recovery, and the third, Adaptive Copy, is used for moving data. For disaster recovery, Synchronous and Semi-Synchronous modes are used. The two options provide different performance and data integrity trade-offs that are selected by the user based on their requirements. They both process I/Os from a serialized queue to maintain, AT ALL TIMES, the data integrity necessary in a disaster recovery environment. The serialized queue protects against writing I/Os to a failed device or link and provides a method to determine, in the event of a failure of any kind, what data has been copied to the remote site and what has not. A very different use of SRDF is data movement. Although we are still concerned with data integrity, the purpose is to move data quickly and efficiently while maintaining a minimal impact on the production system. Safeguards are in place to maintain data integrity, but in contrast to Synchronous and Semi-Synchronous modes, we are only concerned that we have data integrity at the end of the data-movement process. We'll start with Synchronous and Semi-Synchronous modes, but first let's look at the serialized queue...

25 SRDF Serialized Queue
Queue of updates from the Source Symmetrix to the Remote Symmetrix
FIFO queue: First-In, First-Out
Maximum of one I/O per volume in the queue at any time
Diagram: updates for Volumes 16, 21, 5, 62, 3, and 10 queued from the SCSI, ESCON, Fibre, or Channel Adapters toward the RLDs.
This is a simple concept, but worth looking at. As an I/O moves from the host processor through, say, a SCSI adapter to the local Symmetrix, the I/O is placed at the bottom of the queue. The I/O moves through the queue while maintaining its position; in other words, it never falls behind or jumps ahead of any other I/Os in the queue. We call this FIFO. When the I/O reaches the top of the queue, it is sent via the Remote Link Director to the remote Symmetrix. Notice that, just as in a normal local storage subsystem, only one I/O per volume is active.
Additional text to address questions on the one-I/O-per-volume rule: Think of a simple system with a host and a storage subsystem containing 64 disk drives. When the host sends an update to the storage subsystem for drive 16, the host will not post additional updates to that volume until it receives confirmation that the first, or pending, I/O is complete and acknowledged by the storage subsystem. If the host does have a second I/O for drive 16, it understands when the storage subsystem returns a disk-busy message. The host keeps the second I/O update for drive 16 in its queue until the storage subsystem returns the acknowledgment for the pending update and shows drive 16 as ready. If there is a problem with drive 16, the acknowledgment is not returned and the host does not send more data to a failed drive. We simply continue this same procedure in an SRDF environment used for disaster recovery - in the serialized queue. This protects against the host proceeding as if nothing is wrong when data is written to a local volume but cannot be written to the remote volume for some reason - communication links are down, the remote system is not available, and so on.
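A minimal sketch of the queue behavior described above: FIFO ordering plus at most one pending I/O per volume. The class and method names are hypothetical, chosen for illustration; they are not Symmetrix microcode interfaces.

```python
from collections import deque

class SerializedQueue:
    """Toy model of the serialized queue: FIFO, one pending I/O per volume."""
    def __init__(self):
        self.fifo = deque()        # I/Os waiting for a Remote Link Director
        self.pending = set()       # volumes with an unacknowledged I/O

    def enqueue(self, volume, data):
        if volume in self.pending:
            return False           # "device busy": the host must hold this write
        self.pending.add(volume)
        self.fifo.append((volume, data))
        return True

    def send_next(self):
        """Hand the oldest queued I/O to an RLD, preserving FIFO order."""
        return self.fifo.popleft() if self.fifo else None

    def acknowledge(self, volume):
        """Remote Symmetrix confirmed the write; the volume is ready again."""
        self.pending.discard(volume)
```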

26 SRDF Synchronous Mode
1. Local Symmetrix receives a write from the Host
2. Local Symmetrix transmits data to the Remote Symmetrix
3. Receipt acknowledged by Remote Symmetrix
4. Ending status presented to Host
Diagram: local source volume mirrored to remote target volume.
Here we see an easy-to-understand flow of data and how the serialized queue is applied. In Synchronous mode, the host is not presented the ending status until the data is in both the local Symmetrix and the remote Symmetrix. If you want to be absolutely sure that the data on the remote device is identical to the local device at any instant in time, Synchronous mode is your configuration. This absolute data integrity comes at a cost, which we call I/O elongation. Performance will obviously be slower than an update sent to write cache in a local device alone without disaster recovery; however, this may not be discernible unless I/O rates are high and/or distances are great. Let's take a closer look at what the I/O is doing...
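The ordering of the four steps can be sketched in a few lines of Python. The dictionaries standing in for the two write caches and the "ending status" string are illustrative assumptions only, not SRDF internals.

```python
def synchronous_write(local_cache, remote_cache, volume, data):
    local_cache[volume] = data                 # 1. local Symmetrix receives the write
    remote_cache[volume] = data                # 2. data transmitted to the remote Symmetrix
    ack = remote_cache[volume] == data         # 3. receipt acknowledged by the remote
    return "ending status" if ack else None    # 4. only then does the host see ending status

local_cache, remote_cache = {}, {}
print(synchronous_write(local_cache, remote_cache, "volume 16", b"block"))
```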

27 SRDF Synchronous Mode: Source Data Always Equals Target Data
Data integrity of all copies is maintained
Local and Remote Symmetrix maintain a synchronized copy of data
I/O is serialized and placed on the Symmetrix system-wide FIFO queue
"I/O Elongation" - the duration of each I/O is increased due to serial activity
PRIORITIES: Data Integrity, Disaster Recovery
Here are the highlights of Synchronous mode.

28 SRDF Semi-Synchronous Mode
1. Local Symmetrix receives a write from the Host
2. Ending status presented to Host
3. Local Symmetrix transmits data to the Remote Symmetrix
4. Receipt acknowledged by Remote Symmetrix
Diagram: local source volume mirrored to remote target volume.
In contrast to Synchronous mode, Semi-Synchronous mode eliminates I/O elongation by returning the ending status to the host much as a local device would without a remote device. Data is moved to the remote device after the local host is presented the ending status. At the instant in time between step 2 and step 3, the two copies of data are not in synch. Let's look at this in greater detail...
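A companion sketch of the semi-synchronous ordering: ending status is returned before the remote copy is written, and at most one write per volume may be outstanding. Again, the names and data structures are illustrative assumptions, not SRDF internals.

```python
def semi_synchronous_write(local_cache, remote_cache, in_flight, volume, data):
    """Toy ordering only: the host is acknowledged before the remote copy is updated."""
    if volume in in_flight:
        raise RuntimeError("previous write to this volume not yet acknowledged")
    local_cache[volume] = data            # 1. local Symmetrix receives the write
    in_flight.add(volume)
    status = "ending status"              # 2. host sees ending status immediately
    remote_cache[volume] = data           # 3. data then moves to the remote Symmetrix
    in_flight.discard(volume)             # 4. remote acknowledgement clears the volume
    return status

local_cache, remote_cache, in_flight = {}, {}, set()
print(semi_synchronous_write(local_cache, remote_cache, in_flight, "volume 16", b"block"))
```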

29 SRDF Semi-Synchronous Mode
Source Data Almost Equals Target Data - out of synch by at most 1 I/O per volume
I/O to the Remote Symmetrix is done in parallel
Reads are executed even while writes are in transit to the Remote Symmetrix
Additional writes wait until the original write is acknowledged
Local and Remote Symmetrix maintain a nearly synchronized copy of data
I/O is serialized and placed on the Symmetrix system-wide FIFO queue
"I/O Elongation" is eliminated
PRIORITIES: Performance, Data Integrity, Disaster Recovery
Here are the highlights of Semi-Synchronous mode.

30 SRDF Adaptive Copy Mode
1. Local Symmetrix receives writes from the Host
2. Ending status presented to Host
3. SRDF flags the record for later update to the Remote
4. Local Symmetrix transmits data flagged as updated, based on the Time Slice, to the Remote Symmetrix
5. Receipt acknowledged by Remote Symmetrix
Diagram: local source volume mirrored to remote target volume.
Here is Adaptive Copy. Remember that the purpose of Adaptive Copy is not disaster recovery capability but moving data. We don't need the local and remote copies to be synchronized (identical) during the movement - just identical when the move is completed. Also, we no longer use the serialized queue to manage the transfer of I/Os to the remote device. Steps 1 and 2 are just like Semi-Synchronous mode: the host is provided with ending status immediately after the local device receives the update. With Adaptive Copy mode, the update is flagged for a future update and is time-sliced into the Symmetrix job queue. When the acknowledgment from the remote device is received by the local device, the flag is reset.
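Under the same illustrative assumptions as the earlier sketches, the adaptive-copy write path can be outlined as: acknowledge the host at once, flag the track, and clear the flag only when the remote side confirms the copy. Names and structures are ours, for illustration only.

```python
def adaptive_copy_write(local_cache, flagged_tracks, volume, data):
    local_cache[volume] = data            # 1. local Symmetrix receives the write
    status = "ending status"              # 2. host is acknowledged right away
    flagged_tracks.add(volume)            # 3. record flagged for a later remote update
    return status                         # (step 4 happens later, per the time slice)

def on_remote_acknowledged(flagged_tracks, volume):
    flagged_tracks.discard(volume)        # 5. receipt from the remote resets the flag

local_cache, flagged = {}, set()
print(adaptive_copy_write(local_cache, flagged, "volume 16", b"block"), flagged)
on_remote_acknowledged(flagged, "volume 16")
```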

31 SRDF Queue of Adaptive Copy Updates
To RLD’s Updates are Time Sliced into Queue Default Setting is 1 in 10 Every Tenth I/O is an Update to the Remote Symmetrix Setting can be changed at any time Volume 16 Volume 5 Volume 19 here is what our queue looks like with adaptive copy. Updates to the local device are in place and are serialized and handled as a FIFO queue - these are I/Os destined to the local device. The updates to the remote device are time sliced into the queue and when they reach the top they are sent via the Remote Link Directors to the remote device. The frequency of updates can be changed at any time to adjust to the I/O levels. Volume 3 Volume 10 Volume 13 Volume 14 Volume 7 Volume 8 Time Slice Update Volume 17 From SCSI Adapters, ESCON Adapters, Fibre Adapters or Channel Adapters

32 SRDF Adaptive Copy Mode
Source Data Does Not Equal Target Data - possibly out of synch by 'N' tracks, based on Symmetrix activity
I/O to the Remote Symmetrix is done in parallel
Reads and writes are executed even while updates are pending or in transit to the Remote Symmetrix
No serialization of I/O data to the Remote Symmetrix by arrival sequence
Host I/O placed on the FIFO queue periodically, as determined by the AD-COPY RATE
Data movement occurs with minimal impact on the local Host and local Symmetrix
Adaptive Copy configurations: Disk Mode, Write Pending Mode
PRIORITIES: Moving Data, On-Line Backup at Remote Site
Here are the highlights for Adaptive Copy. Notice that Adaptive Copy has two modes of operation - Disk Mode and Write Pending Mode.

33 SRDF Domino
Domino configuration sets volumes to the NOT READY state if any one of the following occurs: Source unavailable, Links unavailable, Target unavailable
Links Only option sets volumes to NOT READY only if the links are unavailable
Diagram: source and target volumes connected by links.
There are additional configuration options for SRDF that let you decide what you want the system to do in the event of a failure. SRDF Domino is a setting that may be used to essentially stop the entire system if a problem exists with the source device, the links, or the target device. This is designed for those who need data integrity first and continuous processing second: Domino prevents any activity if any part of the SRDF system is malfunctioning. Because the use of global spares can reduce the restore time for a pair of mirrored drives (source and target), there is a Links Only option that halts the system only if the links are down. We'll look at how global spares operate in an SRDF configuration in a few moments.
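The Domino decision can be summarized as a small truth function. The parameter names are ours; this is a sketch of the rule as described above, not the actual setting interface.

```python
def volumes_not_ready(source_ok, links_ok, target_ok, links_only=False):
    """Full Domino: NOT READY if the source, links, or target is unavailable.
    Links Only option: NOT READY only when the links are unavailable."""
    if links_only:
        return not links_ok
    return not (source_ok and links_ok and target_ok)

print(volumes_not_ready(source_ok=True, links_ok=False, target_ok=True))                   # True: links down
print(volumes_not_ready(source_ok=False, links_ok=True, target_ok=True, links_only=True))  # False: only links matter
```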

34 SRDF Invalid Tracks
Notification of invalid tracks provided
Stops target volumes from becoming Ready when multiple links fail
Diagram: source and target volumes connected by links.
The SRDF Invalid Tracks setting provides notification of invalid tracks (meaning data is out of synch) if there is a failure in any part of the system, and stops target volumes from becoming Ready when multiple links have failed. This might be the option selected if continuous operation is of higher importance than data integrity.

35 SRDF Operation

36 Error Reporting and Diagnostics
Message reporting to Host/Server console
Auto Call on abnormal conditions in HDA, link, subsystem
Remote support facility diagnostics
No one plans for a disaster, and if one occurs you need to act quickly. SRDF improves your reaction time by reporting messages to the console, and you can use the Auto Call feature to call or page you in the event of an abnormal condition. In addition, SRDF is fully supported by the Remote Support facility that helped Symmetrix set a new bar for excellence.

37 Loss of Drive at Primary Site
Let's look at what happens in the event of a drive being lost at the primary site - we'll assume that Domino is not on and that no dynamic spares are configured. Remember that Domino would force the Symmetrix to return Not Ready to the host.
Diagram: Primary Host and Primary Symmetrix (source volumes) linked to the Remote Symmetrix (target volumes) and Remote Host.

38 Loss of Drive at Site A
Application continues to run on the Host/Server at Site A
Read/write operations on the affected drive go to the Target volume at Site B
Exposure if the Target volume is lost (no local mirroring)
MESSAGE: with address of failed device
Repair drive at Site A
Command to resync: from Target to Source
MESSAGE: when resync starts
MESSAGE: when resync is complete
Operation continues as outlined here.

39 Loss of Primary Site
It has been said that a building is a single point of failure! Let's look at what happens in the event of a failure of the entire primary site.
Diagram: Primary Host and Primary Symmetrix (source volumes) linked to the Remote Symmetrix (target volumes) and Remote Host.

40 Loss of Primary Site
Single command makes target drives accessible to the recovery Host/Server CPU
Command can be issued from: the service processor console, the remote support center, or the local host using the host component
Applications resumed within minutes at the most recent status
The recovery host can access the target drives and can then proceed as a new local system - exactly where the failed host left off if you were running Synchronous mode, or nearly where the old host left off if you were running Semi-Synchronous mode.

41 SRDF Host Component
Runs as an MVS subsystem
Optional, chargeable, licensed per CPU
Complements the SRDF base system
Initiated via a started task
Uses SAI to retrieve information from the Symmetrix subsystem
Provides status and configuration control
Extends automated operations to contingency planning
Basically, there are two different methods of running SRDF depending on your environment. Each has many options, so we'll cover this only as an overview. In a mainframe environment, a separate product called SRDF Host Component runs as an MVS subsystem and is used to control, configure, and manage the functions of SRDF. SRDF Host Component is not mandatory, but it is used by the vast majority of mainframe customers. It is an optional product because in rare configurations and applications customers wish to omit it. An example might be a stable storage environment where SRDF is used in Synchronous mode with Domino simply to maintain a remote copy of data at all costs, without regard to performance or system availability. Rather than explain the functions twice, we'll first introduce Symmetrix Manager for Symmetrix Systems, which is used to control SRDF in the Open Systems environment.

42 Enterprise Storage Checklist
MEDIA / PLATFORM-SPECIFIC CONTROLLERS / MAINFRAME / OPEN / NETWORK / DATABASE
Enterprise connectivity, cascadable, information-centric
Information protection, information sharing, information management
Business impact, operational impact, financial impact
The enterprise storage checklist - we've seen this slide before: media, controllers, and enterprise storage. If you look at where the impacts come from, financial impact comes from the media - heat, light, electricity, floor space - that is, the hardware itself. That's what makes a financial impact. Operational impact comes from the controllers. Business impact comes from the ability to put it all together into enterprise storage. The only way to get all three is to make an enterprise storage decision. Financial impact is HARDWARE; operational and business impact require SOFTWARE. Enterprise storage: it's enterprise connectivity, it's cascadable from environment to environment, and it's information-centric - one way of looking at information protection, one way of looking at information sharing, and one way of looking at information management. Enterprise storage: not a story, not a brochure, but something that can be implemented today at EMC. A point decision in an enterprise world is very, very costly.

