7 Deadly Sins of Data Protection


1 7 Deadly Sins of Data Protection
Rick Glynn, Data Protection Sales

2 Agenda
7 Deadly Sins of Data Protection
Tips for Avoiding the 7 Sins
Backup & Recovery Methods You Can Use
How Dell Can Help

3 “Whatever can go wrong, will go wrong.”
Major Edward A. Murphy was an American aerospace engineer who worked on safety-critical systems. He was involved in the high-speed rocket sled experiments designed to test human tolerance of G-forces during rapid deceleration. Murphy developed sensors capable of measuring the exact amount of G-force applied when the rocket sled came to a sudden stop. The first test after Murphy hooked up his sensors to the harness produced a reading of zero: all of the sensors had been connected incorrectly. Each sensor could be connected in two ways, and each one was installed the wrong way. [click to advance animation] When Murphy discovered the mistake, he said of the technician who was blamed for the foul-up, "If there are two ways to do something, and one of those ways will result in disaster, he'll do it that way." This evolved into Murphy's Law: "Whatever can go wrong, will go wrong."

In February of 2004, a large US bank lost a single magnetic tape with information on roughly 120,000 customers while it was being shipped by truck from a data management center in Singapore. The tape held names, addresses, account numbers and balances. It was never found. In early May of 2005, the same bank lost an entire box of tapes from its financial division in transit with one of the largest privately owned shipping companies. Although the tapes were encrypted and were not known to have been accessed, the news of their loss created a media frenzy. Four million customer files were compromised. Having lost tapes twice, the bank has stated that it will begin sending backups electronically to a secure offsite location.

Corporate data is one of the most prized assets of a company. Companies do everything they can to protect the integrity of their data, from maintaining real-time remote backups to long-term offsite storage. Unfortunately, the press is replete with horror stories of companies that have lost their data for long periods of time, or forever.

4 IT is constantly changing
Data is Growing: The rate of data growth continues to be relentless, ranging from 40% to 60% annually.1 Organizations are struggling to back up all their data within the assigned backup window, and are missing their recovery SLAs.
More Workloads Are Virtualized: By 2012, 50% of all installed workloads were running in a virtual environment.2 80% of organizations view virtual backup as a top IT challenge.3
IT Budgets Are Tight: Storage budgets will grow a mere 1.5% this year.4 Organizations are looking for ways to cut storage costs and maintenance fees.

IT is constantly changing, and from the conversations that we've had, organizations out there are struggling to keep up.

Data growth. The rate of data growth continues to be relentless, ranging from 40% to 60% annually in most firms, with 800% growth anticipated over the next five years. 80% of that growth will be unstructured data, and Big Data will grow 20%.

Growth of virtualization. According to Gartner, 50% of all installed workloads are now running as VMs, and 80% of respondents to an ESG study indicated that virtual backup was one of their top IT challenges.

Yet budgets will rise only 1.5% over last year's levels, a modest increase that barely keeps pace with inflation. As a result, IT managers are looking to simplify their backup and recovery operations. In this presentation, I'll expose the pitfalls: the seven deadly sins of data protection.

Sources:
1) ESG, Trends in Data Protection Modernization, August 2012
2) Gartner, Magic Quadrant for x86 Server Virtualization, 2012
3) Network World, Virtual Backup Challenges Enterprise IT, Feb 2012
4) Storage magazine/SearchStorage Purchasing Intentions survey, Storage 2013
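To make the growth figures concrete, here is a minimal sketch, not from the deck, showing how 40-60% annual growth compounds and how quickly a fixed backup window is outgrown. The starting size, window, and backup throughput are illustrative assumptions.

```python
# Illustrative only: starting size, window, and throughput are assumed figures, not Dell data.

def projected_size_tb(current_tb: float, annual_growth: float, years: int) -> float:
    """Compound the current data size by a fixed annual growth rate."""
    return current_tb * (1 + annual_growth) ** years

def backup_hours(size_tb: float, throughput_tb_per_hr: float) -> float:
    """Hours needed to run a full backup at a given sustained throughput."""
    return size_tb / throughput_tb_per_hr

current = 50.0       # TB protected today (assumption)
window = 8.0         # nightly backup window in hours (assumption)
throughput = 4.0     # sustained TB/hr of the backup infrastructure (assumption)

for growth in (0.40, 0.60):            # 40% and 60% annual growth, as cited on the slide
    for year in range(6):
        size = projected_size_tb(current, growth, year)
        hours = backup_hours(size, throughput)
        fits = "OK" if hours <= window else "misses window"
        print(f"growth {growth:.0%}  year {year}: {size:7.1f} TB -> {hours:5.1f} h ({fits})")
```

At 60% annual growth the dataset roughly matches the deck's "800% over five years" figure, and in this example the nightly window is blown within two to three years.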

5 Deadly Sin #1: Focus on backup, not recovery

6 Instead, shift the focus onto recovery
Recovery is all that matters: 1 in 5 recovery jobs fail to meet their recovery SLAs.1 Know for sure that your backup is completely recoverable, and use automated recovery verification technology.

According to ESG, 1 in 5 recovery jobs are not completed within their prescribed RTO/RPO SLAs. This is the result of a survey conducted by ESG in 2012 and published in "The Modernization of Data Protection" report.

Source: ESG, "The Modernization of Data Protection," 2012
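The deck recommends automated recovery verification but does not describe a mechanism. Below is a minimal sketch of one common approach: checksum verification against a manifest written at backup time. The paths, manifest layout, and function names are hypothetical.

```python
# Hypothetical sketch: verify that every file in a backup set still matches the
# SHA-256 recorded in a manifest written at backup time. Paths are illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(backup_dir: Path, manifest_path: Path) -> bool:
    """Return True only if every file listed in the manifest is present and intact."""
    manifest = json.loads(manifest_path.read_text())   # {"relative/path": "sha256", ...}
    ok = True
    for rel_path, expected in manifest.items():
        candidate = backup_dir / rel_path
        if not candidate.exists():
            print(f"MISSING  {rel_path}")
            ok = False
        elif sha256_of(candidate) != expected:
            print(f"CORRUPT  {rel_path}")
            ok = False
    return ok

if __name__ == "__main__":
    # Run automatically after each backup job, and alert on failure.
    if not verify_backup(Path("/backups/latest"), Path("/backups/latest/manifest.json")):
        raise SystemExit("Recovery verification failed: backup is not fully recoverable")
```

A check like this catches silent corruption before a real restore is needed; commercial tools add restore-level verification, such as booting the recovered image.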

7 Deadly Sin #2: Treat all data equally.

8 Instead, classify data and applications
Not all data has the same criticality and frequency of change:
Static: does not change over time.
Business Vital: is vital to the daily operations of the business.
Mission Critical: if lost or unavailable, even for short periods of time, damage will occur.

9 Deadly Sin #3: Fail to understand your organization’s tolerance for data loss and downtime.

10 Instead, solicit feedback from cross-functional groups outside of IT
Align recovery objectives to your organization's business goals.

Recovery Time Objective (RTO): the tolerable amount of time elapsed between a loss or disaster and the restoration of business operations. It is the time required to physically recover the data or application and have it ready for use.

Recovery Point Objective (RPO): the point in time, relative to the last backup, to which data can be recovered. For example, if you recover a file that was backed up yesterday, then your recovery point is one day.

Critical questions to consider:
• How quickly do you need the data or the application itself restored if it is lost or corrupted?
• What is the impact of losing the most recent data?

RTO: Managing the recovery time of data that is mission-critical to your business is clearly a requirement. You want your RTO to be as short as is fiscally sound for your business.
RPO: Managing the recovery point of data that changes very frequently is clearly a requirement as well. You want it as close to current as is fiscally sound for your business.

Again, be sure to solicit feedback from cross-functional groups outside of IT to thoroughly understand the implications of downtime and data loss.
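As a concrete illustration, not part of the deck, the RPO and RTO actually achieved after an incident can be computed from three timestamps: the last good backup, the moment of failure, and the moment service is restored. The timestamps and targets below are made-up examples.

```python
# Illustrative arithmetic: timestamps and targets are made-up example values.
from datetime import datetime

last_backup      = datetime(2024, 6, 3, 23, 0)   # last successful backup completed
failure          = datetime(2024, 6, 4, 10, 30)  # data loss / outage begins
service_restored = datetime(2024, 6, 4, 14, 45)  # application back in use

achieved_rpo = failure - last_backup          # data created after the backup is lost
achieved_rto = service_restored - failure     # elapsed downtime

print(f"Achieved RPO: {achieved_rpo}  (example target: <= 12 hours)")
print(f"Achieved RTO: {achieved_rto}  (example target: <= 6 hours)")
```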

11 Deadly Sin #4: Believe one approach fits all.

12 Instead, think tiered recovery
Apply the right approaches to meet your requirements.

Static
  Data protection requirements: RTO < 72 hours; RPO < 1 day; strict regulations
  Data protection approaches: back up to tape; archive data

Business Vital
  Data protection requirements: RTO 6-24 hours; RPO 2-12 hours; some regulations
  Data protection approaches: fast recovery; disk-based backup; backup to tape; bare metal recovery

Mission Critical
  Data protection requirements: RTO < 5 minutes; RPO < 5 minutes; limited regulations
  Data protection approaches: fastest recovery; disk-based backup; bare metal recovery
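A minimal sketch of this tiering as a lookup table follows. The dataclass, names, and example dataset are hypothetical; the RTO/RPO figures simply restate the slide, using the upper bound of each range.

```python
# Hypothetical illustration of tiered recovery: map each data class to the
# requirements and approaches listed on the slide.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class Tier:
    rto: timedelta
    rpo: timedelta
    regulations: str
    approaches: tuple

TIERS = {
    "static": Tier(timedelta(hours=72), timedelta(days=1), "strict",
                   ("back up to tape", "archive data")),
    "business_vital": Tier(timedelta(hours=24), timedelta(hours=12), "some",
                           ("disk-based backup", "backup to tape", "bare metal recovery")),
    "mission_critical": Tier(timedelta(minutes=5), timedelta(minutes=5), "limited",
                             ("disk-based backup", "bare metal recovery")),
}

def protection_plan(classification: str) -> Tier:
    """Look up the protection requirements for a classified dataset."""
    return TIERS[classification]

# Example: an order-processing database classified as mission critical.
plan = protection_plan("mission_critical")
print(f"RTO <= {plan.rto}, RPO <= {plan.rpo}, approaches: {', '.join(plan.approaches)}")
```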

13 Deadly Sin #5: Only store one copy of your backup data—onsite.

14 Instead, establish an offsite DR strategy
Will you be able to meet your recovery SLAs in the event of a site disaster?
How will you get your data offsite? Tape, replication, or clustering.
Where should you send your data? A cold, warm, or hot DR site.

It's imperative that you send data offsite to mitigate the risk of downtime and data loss during a site outage. Use data classification and recovery objectives to determine the appropriate method and storage facility. There are many technologies available that help move data to alternate locations. Once you know what data you need to move and have settled on the RTO and RPO requirements, you end up with a quantity of data and a time requirement for moving it. Many products enable data replication, some at the block level on disk-based SANs and some at the host and application levels. These replication solutions are all well and good, but as with any replication technology, the old adage applies: "garbage in, garbage out." Replication technologies don't always catch issues such as data corruption or logical data deletion. Considering an "out of band" solution to protect, replicate and recover the changing datasets allows for greater flexibility when the time comes to recover the data you really need.

15 Deadly Sin #6: Store too much backup data, for far too long.

16 Instead, optimize data retention
Reduce costs and boost performance:
Employ deduplication to reduce your backup storage footprint.
Save on costs with D2D2T, archiving older data to less expensive storage.

IT organizations that fail to optimize data retention and the number of data copies will suffer continuously increasing storage costs and increased risk.

Recommendations: Employ deduplication to reduce the backup storage footprint and see storage savings of 90 to 95%. And tape is not dead, yet: according to Gartner, through 2015 disk-to-disk-to-tape (D2D2T) backup will remain the predominant strategy for large enterprises. D2D2T, while a marketing slogan for years, is now a reality for most products and is deployed by many companies.
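The deck does not prescribe a retention scheme. The sketch below shows one common way to optimize retention, a grandfather-father-son style rotation that keeps a limited number of daily, weekly, and monthly backups and flags the rest for deletion or for archiving to cheaper storage. The retention counts are illustrative assumptions, not a recommendation from the deck.

```python
# Illustrative retention policy (not from the deck): keep 7 dailies, 4 weeklies,
# 12 monthlies; everything else becomes a candidate for archive or deletion.
from datetime import date, timedelta

def retained(backup_dates, dailies=7, weeklies=4, monthlies=12):
    keep = set()
    dates = sorted(backup_dates, reverse=True)
    keep.update(dates[:dailies])                          # most recent daily backups
    weekly = [d for d in dates if d.weekday() == 6]       # Sunday backups
    keep.update(weekly[:weeklies])
    monthly = {}
    for d in dates:                                       # latest backup in each month
        monthly.setdefault((d.year, d.month), d)
    keep.update(sorted(monthly.values(), reverse=True)[:monthlies])
    return keep

today = date(2024, 6, 30)
history = [today - timedelta(days=i) for i in range(400)]  # daily backups for ~13 months
keep = retained(history)
print(f"{len(history)} backups on disk, {len(keep)} retained, "
      f"{len(history) - len(keep)} eligible for archive or deletion")
```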

17 Deadly Sin #7: Think you’re done after you test your plan.

18 Test… Test Again… And Test Yet Again
Instead, continually test and update your plan.

Just as important as planning and implementation is testing that your plan works. Make sure you build in adequate resources that allow you to regularly test both your disaster and recovery plans. This is an essential part of a DR solution, if it's done properly. Why not regularly switch between your production site and your DR site? This will quickly find any holes in your plans. Don't forget the smaller areas of disaster either; always test your data protection with data restores. Make sure you can restore good, consistent datasets across all the different classifications of data you have (static, business vital and mission critical) and don't forget to validate the data. Once restored, use the data: open it in its respective application and ensure it's usable again.
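As a hedged illustration of "restore, then make sure the data is actually usable," the sketch below restores a sample file for each data classification through a hypothetical restore command and checks that the file parses, not merely that it exists. The paths, the `restore-tool` CLI, and the file formats are all assumptions, not part of the deck or any product.

```python
# Hypothetical restore test: paths, the 'restore-tool' CLI, and file formats are
# illustrative assumptions, not a real product interface.
import csv
import json
import subprocess
from pathlib import Path

SAMPLES = {
    "static":           ("archives/2019/contracts.json", json.loads),
    "business_vital":   ("finance/ledger_latest.csv",    lambda t: list(csv.reader(t.splitlines()))),
    "mission_critical": ("orders/orders_today.json",     json.loads),
}

def restore_and_validate(classification: str, restore_dir: Path) -> bool:
    source, parse = SAMPLES[classification]
    restore_dir.mkdir(parents=True, exist_ok=True)
    target = restore_dir / Path(source).name
    # Restore one representative file with whatever restore mechanism you use.
    subprocess.run(["restore-tool", "restore", source, str(target)], check=True)
    try:
        parse(target.read_text())      # the data must be usable, not just present
        return True
    except Exception as exc:
        print(f"{classification}: restored file failed validation: {exc}")
        return False

if __name__ == "__main__":
    results = {c: restore_and_validate(c, Path("/tmp/restore-test")) for c in SAMPLES}
    print(results)
```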

19 Recap: Avoid the 7 deadly sins
Focus on recovery
Classify your data and applications
Solicit feedback from cross-functional groups outside of IT
Think tiered recovery
Establish an offsite DR strategy
Optimize data retention
Continually test and update your data protection plan

20 Backup and recovery methods available

21 Traditional backup and recovery
The "Tried & True":
Backs up application, database, and file server data
Schedule backups and create policies
Run full, incremental, and differential backups
Back up to tape or disk
Typically uses a server/client architecture

Traditional file/folder backup protects data stored in applications and databases, as well as file server data. The user schedules backups and creates policies, and can typically decide whether to run a full, incremental, or differential backup; more on that later. Traditional file/folder backup solutions allow the user to send backup data to either disk or tape. The target storage they choose will depend on their infrastructure, recovery objectives, and retention requirements.

These types of solutions typically use a server/client architecture. The master backup server is the brains of the solution. Agents are installed on all systems you need to protect, and at times plug-ins are used for specific application/database protection. There will be some sort of user interface (Windows or Web-based) to perform and schedule backups and restores. And finally, target storage, which accepts the backup data: this could be disk or tape, or a deduplication appliance.
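The deck mentions full, incremental, and differential backups without defining them. The sketch below, an assumption-laden illustration rather than any product's logic, shows the usual selection rule based on file modification times: a full copies everything, an incremental copies files changed since the last backup of any kind, and a differential copies files changed since the last full.

```python
# Illustrative backup-type selection based on modification time (mtime).
from pathlib import Path

def files_to_back_up(root: Path, mode: str,
                     last_full_ts: float, last_any_ts: float):
    """Yield files to include in a 'full', 'incremental', or 'differential' run."""
    cutoff = {
        "full": 0.0,                  # everything
        "incremental": last_any_ts,   # changed since the last backup of any kind
        "differential": last_full_ts, # changed since the last full backup
    }[mode]
    for path in root.rglob("*"):
        if path.is_file() and path.stat().st_mtime > cutoff:
            yield path

# Example usage with a hypothetical source directory and timestamps (seconds since the epoch):
root = Path("/data/projects")
selected = list(files_to_back_up(root, "differential",
                                 last_full_ts=1_717_200_000.0,
                                 last_any_ts=1_717_800_000.0))
print(f"{len(selected)} files selected for this differential run")
```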

22 Continuous data protection (CDP)
Ideal for "mission-critical" application data:
Continuously captures all changes on the protected server
Eliminates backup windows
Super granular recovery points: restore to practically any point in time
Fast recovery of data
Excellent if you can't afford prolonged downtime or can't afford to lose mission-critical data

High-value data that is considered mission-critical to your business will likely require a level of real-time protection, local data protection copies, and the creation of disaster recovery copies. Continuous Data Protection (CDP) technologies provide your business with the maximum RTO and RPO benefits. Best-of-breed CDP solutions will allow you to recover your data back to any point in time, down to the second, and some even provide a high-availability option for your applications, allowing failover that brings them back up within seconds.
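To show what "restore to practically any point in time" means mechanically, here is a toy sketch, purely illustrative and not how any particular CDP product is built, of a change journal that records every write with a timestamp and replays the journal up to a chosen recovery point.

```python
# Toy continuous-data-protection journal: every change is appended with a
# timestamp, and a restore replays changes up to the requested point in time.
from datetime import datetime

class ChangeJournal:
    def __init__(self):
        self.entries = []                       # (timestamp, key, value)

    def record(self, when: datetime, key: str, value: str):
        self.entries.append((when, key, value))

    def restore_as_of(self, point_in_time: datetime) -> dict:
        """Rebuild the protected dataset as it looked at the given instant."""
        state = {}
        for when, key, value in sorted(self.entries):
            if when > point_in_time:
                break
            state[key] = value
        return state

journal = ChangeJournal()
journal.record(datetime(2024, 6, 4, 10, 0, 0), "order:1001", "placed")
journal.record(datetime(2024, 6, 4, 10, 0, 5), "order:1001", "paid")
journal.record(datetime(2024, 6, 4, 10, 0, 9), "order:1001", "corrupted!")  # bad write

# Recover to one second before the bad write:
print(journal.restore_as_of(datetime(2024, 6, 4, 10, 0, 8)))   # {'order:1001': 'paid'}
```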

23 Bare Metal Recovery (BMR)
Get entire systems back up and running, fast:
Backs up not only the data, but also the operating system and configuration settings
Enables users to quickly restore an entire server from its bare metal state
Look for solutions offering recovery to similar hardware, dissimilar hardware, or a virtual machine

Bare Metal Recovery is a data recovery technique that quickly gets your servers up and running again after a server failure, even if the environment has no functioning operating system. With a BMR solution, you can quickly rebuild the server, including its operating system, network and system settings, application binaries, disk partitions, and data. This will help you meet aggressive recovery time objectives and SLAs, because automation eliminates much of the manual intervention and guesswork. Many BMR solutions will recover servers to similar hardware, dissimilar hardware, or a virtual machine.

24 Replication
Minimize network traffic using WAN-optimization techniques
Gain redundancy by sending a copy of data from one source to a target
Used for improved reliability, fault tolerance and/or ensured accessibility
Replication over the WAN supports disaster recovery
Different from "backup" because replicas are frequently updated and quickly lose any historical state

With replication, you can gain redundancy by sending secondary copies of your data from one location to another, either on a real-time or near real-time basis. This allows you to improve reliability and accessibility. Replication over the WAN is common in disaster recovery. It's different from backup because replicas are frequently updated and (without snapshots) quickly lose any historical state.
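A minimal sketch of near-real-time file replication follows: it copies only files whose content differs from the copy on the target, and it also illustrates why a replica, unlike a backup, carries no history once it is updated. The directory paths and polling interval are illustrative assumptions.

```python
# Toy one-way replication loop: mirror changed files from source to target.
# Once a change replicates, the previous state on the target is gone (no history),
# which is the key difference from a backup.
import filecmp
import shutil
import time
from pathlib import Path

def replicate_once(source: Path, target: Path) -> int:
    copied = 0
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = target / src_file.relative_to(source)
        if not dst_file.exists() or not filecmp.cmp(src_file, dst_file, shallow=False):
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)     # overwrites the replica in place
            copied += 1
    return copied

if __name__ == "__main__":
    src, dst = Path("/data/primary"), Path("/replica/dr-site")
    while True:                                   # near-real-time: poll every few seconds
        changed = replicate_once(src, dst)
        if changed:
            print(f"replicated {changed} changed file(s)")
        time.sleep(5)
```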

25 Data deduplication
Reduce your backup storage footprint by 90-95%
The process of examining a data set or byte stream at the sub-file level and storing and/or sending only unique data
Duplicate data segments are replaced with a pointer to the first occurrence of the data
(Diagram: first full backup, daily backups, second full backup)

Data deduplication is one of the primary optimization technologies in use today by disk-based data protection vendors. The purpose of deduplication is to reduce the overall size of backup data (the backup data "footprint") needing to be transmitted or stored to a secondary disk target. Deduplication is the process of examining a data set or byte stream at the sub-file level and storing and/or sending only unique data. Duplicate data that is removed or omitted is replaced by some type of pointer to the original, remaining file or data block.
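As a hedged illustration of sub-file deduplication (fixed-size chunking with SHA-256, a simplification; real appliances typically use variable-length chunking), the sketch below stores each unique chunk once and represents files as ordered lists of chunk hashes, i.e., pointers to the first occurrence of each segment.

```python
# Toy sub-file deduplication: split data into fixed-size chunks, store each
# unique chunk once, and keep per-file "recipes" of chunk hashes (pointers).
import hashlib

CHUNK_SIZE = 4096
chunk_store: dict[str, bytes] = {}      # hash -> chunk bytes (stored once)
recipes: dict[str, list[str]] = {}      # file name -> ordered list of chunk hashes

def ingest(name: str, data: bytes) -> None:
    hashes = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)      # duplicates become pointers only
        hashes.append(digest)
    recipes[name] = hashes

def restore(name: str) -> bytes:
    return b"".join(chunk_store[h] for h in recipes[name])

# Two "full backups" that are 90% identical:
base = bytes(100_000)
ingest("full_backup_1", base)
ingest("full_backup_2", base[:90_000] + b"x" * 10_000)

logical = sum(len(restore(n)) for n in recipes)
stored = sum(len(c) for c in chunk_store.values())
print(f"logical {logical} bytes, stored {stored} bytes, "
      f"savings {(1 - stored / logical):.0%}")
assert restore("full_backup_1") == base
```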

26 Save on storage using deduplication
Over 90% storage savings using deduplication. (Chart: backup storage consumed without dedupe vs. with dedupe; less is better.)

We tested deduplication using a simulated backup cycle of 21 full backups with a change rate of 5-10%. That's equivalent to just over 5 months of data retention. The first full backup consumed 20GB of storage. Using traditional storage, each subsequent backup must store all of that data plus changes; as a result, after 21 full backups, the dataset ballooned to 503GB of capacity (the grey section of the chart). Using NetVault with the DR4100 appliance, only unique changes are stored after the initial full, so after 21 weeks the total storage consumed is only 42GB, a 91% reduction in capacity.
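The savings figure follows directly from the capacities stated above; a quick check using only the numbers quoted on the slide:

```python
# Check the storage-savings arithmetic quoted on the slide.
traditional_gb = 503     # 21 full backups stored in full
deduplicated_gb = 42     # initial full plus unique changes only
savings = 1 - deduplicated_gb / traditional_gb
print(f"{savings:.1%} reduction in backup storage")   # ~91-92%, i.e. "over 90%"
```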

27 Increase Backup Performance
Backup and replication aggregate throughput with deduplication (more is better): 3.9 TB/hr using target-side dedupe; 7.5 TB/hr using source-side dedupe; 7.5 TB/hr using WAN-optimized replication.

We wanted to test aggregate throughput using RDA for backup and replication tasks: backup with target-side deduplication, backup with source-side deduplication, and optimized replication between DR4100 targets. In this test, we used NetVault to back up 8 Linux clients over a 10-gigabit Ethernet network. Each client had roughly 128GB of unique data, for a total of 1,024GB. For the initial full backup, all 8 clients were backed up in parallel. After that, 10% of the data was modified, and all 8 clients were backed up again.

Results: For each 128GB backup with 10% changed data, target-side deduplication resulted in an aggregate throughput of 3.9TB/hour with eight simultaneous jobs, while source-side deduplication resulted in an aggregate throughput of 7.5TB/hour with eight simultaneous jobs. Optimized replication also resulted in an aggregate throughput of 7.5TB/hour with eight simultaneous jobs. Source-side deduplication demonstrated a 92% improvement in aggregate throughput over target-side deduplication.

28 Shrink backup and replication windows
Backup and replication job duration with deduplication (duration in minutes; less is better): 15.75 minutes using target-side dedupe; 8.2 minutes using source-side dedupe and WAN-optimized replication.

Another key metric for evaluating performance is how long each backup or replication job takes to run. As this graph demonstrates, the average job duration for target-side deduplicated backup was roughly 15.75 minutes, while for source-side deduplicated backup and optimized replication the average job duration was roughly 8.2 minutes.

29 How Dell Can Help

30 Dell Backup & Recovery
Reduce data loss: generate recovery points every 5 minutes
Improve recovery times: recover from system failure in 15 minutes or less
Scale: protect data on hundreds to thousands of servers
Protect large environments: protect continuously, move anywhere, recover everything; protect large volumes of data across a wide range of platforms and applications; optimize storage and replication (DR4100); back up and restore VMs at the same time, without limits

31 And introducing Rapid Data Access (RDA)
What is it?
RDA is a plug-in for the DR family of appliances
It takes advantage of NetVault Backup's API for disk-based storage appliances
It enables tight integration between NetVault and the DR4x00
NetVault retains end-to-end control of all backup/restore tasks
The DR4x00 appliance has control over storage management

(Diagram: a source server running NVBU client software with either the RDA plug-in or an NVBU application plug-in; a backup server running NVBU server software v9.2, the NVBU storage API, and the RDA plug-in; and a storage target running Dell DR System Software release 2.1.)

32 How RDA works
NetVault now seamlessly integrates with the Dell DR4100. Dramatically shrink backup windows and improve restores using RDA:
Backup rates up to 275% faster, with ingest rates of up to 7.5TB per hour
Backup storage footprint reduced by as much as 93%
Network utilization reduced by as much as 95%
Automation of backup-to-tape and D2D2T workflows

(Diagram: application servers and a NetVault client with the RDA plug-in back up through a NetVault server to a local Dell DR4100; NetVault-aware, WAN-optimized replication then copies the data across the WAN to a second Dell DR4100.)

33 Protected by Dell
Increasingly, customers turn to Dell for help with their data protection needs:
82,000 customers globally, from small business to Fortune 500
60% of NetVault customers spend 2 hours or less per week monitoring backups
Trusted by 30 of the world's top 40 banks
#6 in market share after just 18 months
400PB of data protected by one of the top 3 internet search providers
3,000 new customers added each quarter

34 The Dell Data Protection Portfolio
Everything. Every time. On time.
NetVault: protect large volumes of data across a wide range of platforms and applications
AppAssure: protect continuously, move anywhere, restore in as little as seconds
vRanger: back up and restore VMs at the same time, without limits
DL4000 & DL backup appliances: powered by AppAssure Backup and DR Suite
DR4100, DR and DR2000v appliances: deduplication & compression
(Slide graphic spans complex, virtual, and mission-critical workloads.)

35 Dell AppAssure
Protect continuously, move anywhere, and recover everything in as little as seconds, every time. Recover from complete system failure in 15 minutes or less.
Protect: virtual or physical; up to 288 snapshots/day; bootable virtual standby; failover & failback
Move: WAN-optimized replication; built-in replication workflows; customizable
Recover: physical, virtual or cloud; instantaneous & granular restore; validate nightly; track & monitor

36 Fast, scalable, simple-to-use VMware Data Protection
Dell vRanger: fast, scalable, simple-to-use VMware data protection.
Backup: high-speed, agentless backup of VMware virtual infrastructures and physical Windows servers
Replicate: cost-effective data replication of key VMs and clusters provides failover/failback between primary and DR sites
Recover: catalog search and fast restore for VMs, physical servers, and even individual files

37 Dell NetVault Backup
Enterprise backup and recovery that's easy to use.
Protect: reliably protect large volumes of data across diverse IT environments
Store: target disk- or tape-based devices, and deduplicate data to save on storage
Scale: accommodate growth with an easy-to-deploy, easy-to-manage modular architecture

38 Dell DR4100 & DL4000 Turn big data into little data
DR4100: deduplication & compression appliance
Reduce backup storage: powerful deduplication to reduce your backup storage footprint by up to 15:1
Minimize network traffic: inline and source-side deduplication to minimize network traffic
Streamline disaster recovery: WAN-optimized replication to reduce network traffic and improve recovery times

