How Burton Snowboards is Carving Down the OpenStack Trail


1 How Burton Snowboards is Carving Down the OpenStack Trail
Jim Merritt / Systems Engineer / Burton Snowboards
Introduce yourself – Burton. Want to talk a bit about how we got involved in OpenStack, and specifically how OpenStack Swift has impacted IT. Background in the financial sector and in a scientific research organization in the area of genomics, so basically I had experience in the area of “Big Data”.

2 How “big” is big data?
But what is Big Data? Discuss thoughts here.

3 Starting Point – The Data
- ~250TB of various structured and unstructured data
  - Databases (ERP, DW, misc.) are actually a small percentage
  - Consists mainly of marketing collateral (e.g. video, photos, and other media)
- Protect all of the data, and allow for data growth
- “Traditional” methods of data protection are becoming expensive, both administratively and monetarily
- “How do we recover?” is becoming a difficult question to answer
- Two main objectives of the project: data management and data protection

4 Starting Point – The Infrastructure
- Two SAN/NAS storage arrays
- Disk-to-disk-to-tape data protection architecture
- Tape library – 120 slots, LTO-5
- Off-site storage facility for tapes
- Off-site facility to host disaster recovery hardware
- Commvault data-protection software
- Mix of server hardware (Dell, Cisco UCS, HP)
- Mix of operating systems (Windows Server, Linux, Solaris)
  - Linux distributions: SUSE, CentOS, Debian
- VMware ESXi

5 The Old Way…
- Storage systems used for both primary storage and as backup target
- Complicated processes for data protection, and even more complicated recovery
- Relied on “shipping” LTO tapes to an off-site facility
- Difficult to execute the disaster recovery procedure

6 Technical (and not so Technical) Issues
- “Traditional” data protection model
  - Lots of data in flight: raw data, intermediate copies, AUX copies, copies of the copies, …
  - Lots of tapes to manage
  - Tape library/drive maintenance
- Little deduplication in use, lots of data in flight
  - Video and images don’t dedup well
  - Deduplication can be expensive
- Integration timing between different data silos
  - Primary storage, LTO tape drives (SAN)
  - Network (NDMP, SMB, NFS)
  - Oracle (RMAN -> NFS)
- Data management/curation
- Administrative effort
  - Complicated backup
  - Complicated recovery

7 Planning for change
- Leverage past experience with “big” data
  - Petabyte-scale data management and protection
  - Concept of “raw” and “intermediate” data
- Familiarity with several object store solutions
  - Built out test implementations
- Turns out that our data problems are the same, just at a smaller scale
  - Large amount of static unstructured data
  - Old data had large value and had to be retained
- Adopting a mindset that we are “big data” consumers as well. It is not as scary now.

8 Our Solution
- Deploy OpenStack Swift as the backup target
- Utilize a remote site as an additional object store location
- Utilize commodity hardware and networking as appropriate
- “Archive” old unstructured data
- Gain a disaster recovery strategy almost inadvertently
- Eliminate or drastically reduce tape management
- Utilize SwiftStack to reduce deployment effort and ongoing maintenance and management

9 Hardware/Software Implementation
- OpenStack Swift implementation with 2 regions, 3 zones, 3 object nodes per zone, and 2 proxy-account-container nodes
- Each zone is its own rack with separate power and network
- One region is located at our main data center, and one region is located at a co-lo facility
- Commvault “cloud” libraries created for dedup and non-dedup data
- SwiftStack utilized for cluster deployment & management (ring sketch below)
- SwiftStack CIFS/NFS gateway utilized for “archive” storage access
  - Virtualized system
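SwiftStack automated the ring deployment for us, but for context, a topology like this maps onto Swift's ring builder roughly as follows. A minimal sketch only: the part power, IPs, port, device names, and weights are all assumed for illustration, not our production values.

    # Create an object ring: 2^14 partitions, 3 replicas, min 1 hour between
    # moves of any given partition (all values assumed for illustration).
    swift-ring-builder object.builder create 14 3 1
    # Add one device per region/zone in rXzY-IP:PORT/device notation; the real
    # cluster adds every disk on all 3 object nodes in each zone the same way.
    swift-ring-builder object.builder add r1z1-10.0.1.11:6200/d1 100
    swift-ring-builder object.builder add r1z2-10.0.2.11:6200/d1 100
    swift-ring-builder object.builder add r1z3-10.0.3.11:6200/d1 100
    swift-ring-builder object.builder add r2z1-10.1.1.11:6200/d1 100
    # Distribute partitions across the devices and write the ring file.
    swift-ring-builder object.builder rebalance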

10 Server Hardware
Object Nodes
- 3 x Silicon Mechanics Storform v518.v5P – CentOS 7
  - 64GB RAM, 2 x E5-2620v3
  - SSD (operating system), SATA (object storage)
- 6 x Silicon Mechanics Storform v518.v4 – CentOS 7
Proxy/Account/Container Nodes
- 2 x Silicon Mechanics R345.v4 – CentOS 7
  - 128GB RAM, 2 x E5-2650v2
  - 2 x 250GB (operating system), 3 x 200GB SSD (account/container storage)
Network
- Netgear 10GbE switch, 1 per zone
SSL Offload / Load-Balance
- Virtualized haproxy system, CentOS 7
SwiftStack Gateway
- Virtualized
Silicon Mechanics – Supermicro. Spend some time here about distributions (Ubuntu, CentOS) – make sure distributions and versions closely match. LSISAS2308 – old nodes; LSISAS3008 – new nodes.
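On the object nodes, each storage drive is normally formatted XFS and mounted individually, per the upstream Swift deployment guide. SwiftStack provisions this for us, so the following is just a minimal sketch with an assumed device name and mount point.

    # Format one object drive with XFS, the filesystem Swift's deployment
    # guide recommends, and label it to match the ring device name.
    mkfs.xfs -L d1 /dev/sdb
    mkdir -p /srv/node/d1
    # noatime avoids an access-time write on every object read.
    echo 'LABEL=d1 /srv/node/d1 xfs noatime 0 0' >> /etc/fstab
    mount /srv/node/d1
    chown -R swift:swift /srv/node/d1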

11 New DP/DR Storage Infrastructure
- Commvault media servers connect to the cloud library via haproxy
- Archive storage ingest via CIFS/NFS via the SwiftStack Gateway
- End-users access archive data via the SwiftStack Gateway
- Hosted version of the SwiftStack controller
- Cluster-facing network and …
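To show what the SSL offload / load-balancing layer does, here is a minimal haproxy configuration sketch; the certificate path, backend IPs, and ports are assumptions, not our production settings.

    # Terminate TLS from Commvault media servers and gateway clients, then
    # balance requests across the two proxy-account-container nodes.
    frontend swift_https
        bind *:443 ssl crt /etc/haproxy/swift.pem
        mode http
        default_backend swift_proxies

    backend swift_proxies
        mode http
        balance roundrobin
        # Swift's healthcheck middleware answers GET /healthcheck with 200 OK.
        option httpchk GET /healthcheck
        server pac1 10.0.0.21:8080 check
        server pac2 10.0.0.22:8080 check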

12 Paradigm Shift
- New infrastructure required a “procedural” shift
- From a “wild-west,” “data goes anywhere” mentality to more structured data placement
  - Place data in “dated” structures – put some initial structure to the unstructured data
- Data marked for archive is only moved once, into the object store
- Only require a “primary” copy of backup data
  - No “auxiliary copy” created in Commvault for off-site retention
- Use native database backup methods to write once via the CIFS/NFS gateway (see the RMAN sketch below)
  - Oracle RMAN in testing, MS-SQL in production
- CommServe DR and dedup database backups go into the object store for access in case of DR
Backup is not sexy…
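For the Oracle path, a write-once backup to a gateway-backed mount looks roughly like this in RMAN. A minimal sketch: the mount point and format string are assumed for illustration.

    RUN {
      # Send backup pieces to an NFS export from the SwiftStack gateway, so
      # each piece is written into the object store exactly once.
      ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '/mnt/swiftgw/oracle/%d_%U.bkp';
      BACKUP DATABASE PLUS ARCHIVELOG;
    }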

13 Backup Software Configuration
This is the only additional configuration required for Commvault

14 Network Traffic
- Swift input – clients to Swift proxy
- Swift WAN – between regions
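These graphs come from our monitoring; for quick spot checks of inter-region activity, Swift's bundled swift-recon tool can report replication stats from any node with the recon middleware enabled. A minimal sketch:

    # Report object replication stats (time since last pass, pass duration)
    # gathered from each object node's recon data.
    swift-recon object --replication
    # Async pendings and disk usage are useful to watch during WAN-heavy periods.
    swift-recon object -a -d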

15 A year after initial deployment…
- Commvault
  - After initial issues with Commvault, this has been working well
  - Commvault version 10 SP11 was the first version with good Swift storage support
  - We currently have 160TB of backup data in the object store
  - Using the default 3-replica policy
- Currently archiving 25TB
  - At this time we move a “dated” folder to a separate container in Swift and create read-only CIFS access via the gateway (see the sketch below)
- Much easier/more reliable recovery process
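The “dated folder into its own container” step can be done with the standard python-swiftclient CLI. A minimal sketch, assuming auth credentials are already set in the environment and a hypothetical archive-2015 container name:

    # Upload the dated folder into its own container (created on first use);
    # --changed re-uploads only files whose content has changed.
    swift upload archive-2015 ./2015 --changed
    # Verify object count and total bytes for the new container.
    swift stat archive-2015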

16 …and next steps
- Erasure-coded containers
- Archive more data into the object store
- Metadata search
- Integration with our ELK stack for auditing
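Erasure coding in Swift is defined as a storage policy in swift.conf. A minimal sketch, where the policy index, name, and 4+2 data/parity scheme are assumptions for illustration:

    [storage-policy:1]
    name = ec42
    policy_type = erasure_coding
    # Reed-Solomon via liberasurecode; 4 data + 2 parity fragments per segment.
    ec_type = liberasurecode_rs_vand
    ec_num_data_fragments = 4
    ec_num_parity_fragments = 2

A separate object ring (object-1) would also have to be built for the policy before containers can use it.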

17 Thank You
Jim Merritt
Senior Systems Engineer
Burton Snowboards

