1
Enterprise Class Virtual Tape Libraries
Presenter Name Job title Date
2
Designed to Solve Real-World Problems
Exponential data growth
Static or shrinking data backup windows
Stringent regulatory requirements for data protection and retention
Challenging internal service level requirements
Spiraling costs of IT resources, admin labor, and capital expenses
Limits on data center power and space
3
Grid Architecture: Scale Performance by Adding SRE® Nodes
Independent processing nodes
Two Fibre Channel ports each
Write data at disk speed
Stack up to 16 nodes with linear performance
Max throughput of 34.5 TB/hour
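To make the linear-scaling claim concrete, here is a back-of-the-envelope sketch in Python. The per-node rates are assumptions for illustration: the deck elsewhere cites 300 MB/sec per node, which yields roughly 17 TB/hour at 16 nodes, while the 34.5 TB/hour maximum implies a per-node rate of roughly 600 MB/sec.

```python
# Back-of-the-envelope check of linear throughput scaling.
# Per-node rates are illustrative assumptions, not vendor specs.

def aggregate_throughput_tb_per_hour(nodes: int, mb_per_sec_per_node: float) -> float:
    """Aggregate throughput under ideal linear scaling."""
    mb_per_hour = nodes * mb_per_sec_per_node * 3600
    return mb_per_hour / 1_000_000  # MB -> TB (decimal units)

print(aggregate_throughput_tb_per_hour(16, 300))  # ~17.3 TB/hour
print(aggregate_throughput_tb_per_hour(16, 600))  # ~34.6 TB/hour
```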
4
Scale Capacity by Adding Disk RAID sets
Grid Architecture: Scale Performance by Adding SRE® Nodes
Independent processing nodes
Two Fibre Channel ports each
Write data at disk speed
Stack up to 16 nodes with linear performance
Max throughput of 34.5 TB/hour
Function as a unit
Automatically load balances
Backup and restore data through any port
Scale Capacity by Adding Disk RAID Sets
Each disk array is a RAID set
Single appliance scales to 1.2 petabytes
Any node can write to the arrays and access any data from anywhere
Dynamic disk file system (DFS) eliminates fragmentation
No hot spots or fragmentation
5
Grid Architecture: 3-Dimensional Scaling Flexibility
Change your I/O performance and storage space over time
Increase your data handling throughput at any time by adding more nodes
Increase your storage capacity at any time by adding new disk trays
Increase both on the fly, up to 34.5 TB/hour and 1.2 petabytes in a single appliance, without disrupting the system
[Diagram axes: System Configuration, I/O Performance, Storage]
6
Grid Architecture: Handle Data Growth Without Over-Provisioning or Forklift Upgrades
Scale capacity and performance independently
Capacity scales from 7.5 TB to 2 PB; performance scales from X to 17 TB/hour
Add SRE nodes for more performance
Scale from 1 to 16 nodes at 300 MB/sec each
All nodes share the same storage and present one or multiple libraries
Each node is an independent processing unit
Add capacity by adding disk space
Capacity is virtualized across multiple RAID controllers (a sketch of this idea follows)
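Below is a minimal toy allocator, not SEPATON's implementation, showing what virtualizing capacity across RAID controllers can mean: a logical pool draws fixed-size extents from whichever RAID set has the most free space, so adding a disk tray simply adds another pool of extents. The class names and the most-free-first policy are assumptions for illustration.

```python
# Illustrative model of capacity virtualization across RAID controllers.

class RaidSet:
    def __init__(self, name: str, extents_free: int):
        self.name = name
        self.extents_free = extents_free

class VirtualCapacityPool:
    def __init__(self, raid_sets):
        self.raid_sets = list(raid_sets)

    def add_raid_set(self, raid_set: RaidSet):
        """Adding a disk tray grows the pool without disruption."""
        self.raid_sets.append(raid_set)

    def allocate(self, extents_needed: int):
        """Spread an allocation across controllers, most-free first."""
        placement = []
        for _ in range(extents_needed):
            target = max(self.raid_sets, key=lambda r: r.extents_free)
            if target.extents_free == 0:
                raise RuntimeError("pool exhausted")
            target.extents_free -= 1
            placement.append(target.name)
        return placement

pool = VirtualCapacityPool([RaidSet("array-1", 4), RaidSet("array-2", 4)])
pool.add_raid_set(RaidSet("array-3", 8))   # new tray, no downtime
print(pool.allocate(6))                    # extents spread across arrays
```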
7
Each port can handle any inbound request, regardless of where the data is stored.
The least burdened node handles the request, leaving the other nodes free for other requests.
This avoids bottlenecks: each node can get data from any storage array. A sketch of the dispatch idea follows.
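A minimal sketch of least-burdened dispatch, assuming a simple outstanding-request count as the load metric (the actual balancing policy is not described in the deck):

```python
# Toy model: any port accepts a request; the grid routes it to the
# node with the fewest requests currently in flight.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    outstanding: int = 0  # requests currently in flight

def dispatch(nodes: list[Node], request: str) -> Node:
    """Route a request to the least burdened node."""
    target = min(nodes, key=lambda n: n.outstanding)
    target.outstanding += 1
    print(f"{request} -> {target.name}")
    return target

grid = [Node("sre-1", 3), Node("sre-2", 1), Node("sre-3", 2)]
dispatch(grid, "restore job A")  # sre-2 (lowest load)
dispatch(grid, "backup job B")   # sre-2 again (tied with sre-3; min keeps the first)
```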
8
Simple Management: Reduce Administration Costs, Eliminate Disruption
Preconfigured appliance design installs in hours
Automation of all disk subsystem management:
Capacity allocation and provisioning
Load balancing
Automatic system monitoring
Call-home feature automatically notifies SEPATON Support and system administrators
Predictive monitoring of critical components warns of potential failures before they happen (see the sketch below)
Manage 2 PB of data from one console
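A hedged sketch of how threshold-based predictive monitoring with call-home notification could look; the components, metrics, and thresholds here are illustrative assumptions, not SEPATON's actual telemetry:

```python
# Toy predictive monitor: warn before failure by watching trends/thresholds.
WARN_THRESHOLDS = {
    "fan_rpm_min": 2000,      # fans slowing down often precede failure
    "disk_realloc_max": 50,   # climbing reallocated-sector counts
    "psu_temp_c_max": 70,
}

def check_component(name: str, metrics: dict) -> list[str]:
    alerts = []
    if metrics.get("fan_rpm", 1e9) < WARN_THRESHOLDS["fan_rpm_min"]:
        alerts.append(f"{name}: fan speed degrading")
    if metrics.get("disk_realloc", 0) > WARN_THRESHOLDS["disk_realloc_max"]:
        alerts.append(f"{name}: reallocated sectors climbing")
    if metrics.get("psu_temp_c", 0) > WARN_THRESHOLDS["psu_temp_c_max"]:
        alerts.append(f"{name}: power supply running hot")
    return alerts

def call_home(alerts: list[str]) -> None:
    """Stand-in for notifying support and administrators."""
    for a in alerts:
        print("NOTIFY support + admins:", a)

call_home(check_component("node-3", {"fan_rpm": 1500, "disk_realloc": 12}))
```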
9
How Nodes Stay Online During an Upgrade
Seamless Implementation: Reduce Administration Costs, Eliminate Disruption
Storage and gateway nodes cooperate throughout the upgrade
Admin console shows new storage immediately
Files remain accessible throughout the process
10
Software: Add the Latest Features Without Forklift Upgrades
ContentAware™ technology
Built-in intelligence captures metadata about backup file content
Enables advanced deduplication and search capabilities (an illustrative sketch follows)
Add advanced features as fully integrated software applications:
DeltaStor® data deduplication software
Site2™ remote replication software
Search
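As an illustration of what "capturing metadata about backup file content" could mean, here is a sketch of a per-file record with boundaries and a content fingerprint; the field names and hash scheme are assumptions, not the ContentAware format:

```python
# Toy record of the metadata a content-aware backup file system
# might capture per file inside a backup stream.
import hashlib
from dataclasses import dataclass

@dataclass
class BackupFileRecord:
    path: str         # file name as seen inside the backup image
    offset: int       # where the file begins in the stream
    length: int       # how far it extends (end = offset + length)
    fingerprint: str  # content hash, usable for deduplication and search

def index_file(path: str, offset: int, payload: bytes) -> BackupFileRecord:
    """Record boundaries and a content fingerprint for one file."""
    return BackupFileRecord(
        path=path,
        offset=offset,
        length=len(payload),
        fingerprint=hashlib.sha256(payload).hexdigest(),
    )

rec = index_file("/home/docs/report.doc", offset=4096, payload=b"...file bytes...")
print(rec.fingerprint[:16], rec.offset, rec.length)
```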
15
That node can only get files or write files to the areas it governs.
…if you want another file, you need another node. Previous systems designate specific ports to govern specific data volumes, meaning only one port can respond to a given file request or write data to a given storage region. Only the node dedicated to a particular data segment can respond to requests for its region. This means that any particular node has access to only a limited set of your stored data, creating potential bottlenecks for your most requested files. Tide of requests from backup clients…
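To make the contrast concrete, here is a toy comparison of static volume ownership versus grid dispatch; both models are illustrative assumptions, not descriptions of any specific product:

```python
# Static ownership vs. grid dispatch under a skewed request mix.
from collections import Counter

requests = ["vol-A"] * 5 + ["vol-B"] * 1   # vol-A is the hot volume

# Previous-system model: each volume is owned by exactly one node,
# so all requests for hot data queue on the same node.
owner = {"vol-A": "node-1", "vol-B": "node-2"}
static_load = Counter(owner[v] for v in requests)
print("static ownership:", dict(static_load))  # node-1 takes 5 of 6 requests

# Grid model: any node can serve any volume; round-robin for illustration.
nodes = ["node-1", "node-2", "node-3"]
grid_load = Counter(nodes[i % len(nodes)] for i, _ in enumerate(requests))
print("grid:", dict(grid_load))                # load evenly spread
```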
16
SEPATON node examines the file, noting its metadata and endpoints
Here the SEPATON node, with its ContentAware file system, writes a file to a storage volume in the grid. Because the file system maps across storage volumes, and because the ContentAware design lets SEPATON know where the file begins and ends, SEPATON avoids fragmentation and keeps the bits of the file together. Tide of write requests from backup clients… The SEPATON file system bundles the bits of the file together rather than chunking it across the storage grid.
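A minimal sketch of the "keep the bits together" idea: because the writer knows a file's boundaries up front, it can reserve one contiguous extent instead of scattering chunks. The first-fit policy and data structures are assumptions for illustration, not SEPATON's DFS:

```python
def write_contiguous(volume_free_ranges, file_size):
    """Place the whole file in the first free range that fits.

    volume_free_ranges: list of (start, length) free extents on a volume.
    Returns (start, file_size) for the file, mutating the free list.
    """
    for i, (start, length) in enumerate(volume_free_ranges):
        if length >= file_size:
            # Shrink the free range; the file occupies its front.
            volume_free_ranges[i] = (start + file_size, length - file_size)
            return (start, file_size)
    raise RuntimeError("no contiguous extent large enough")

free = [(0, 100), (500, 1000)]
print(write_contiguous(free, 300))  # (500, 300): one unbroken extent
print(free)                         # [(0, 100), (800, 700)]
```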
17
This is another view of the SEPATON file system. Because it maps across the entire grid of storage volumes, files can be stored intelligently, without being forced into regimented, scattered chunks.
18
Other systems are not as smart and, as a result, are prone to scattering file segments across the storage environment, lowering read/write performance and increasing the risk of data loss.