1 A Case Study: Superdome Implementation at a Glance
Julie J. Smith, Wachovia Corporation

2 Agenda
- Business Challenges
- Strategy
- Consolidation
- Superdome
- Executive Summary
- Project Scope and Timeline
- Migration Philosophy
- Migration Phases
- Current Status

3 Business Challenges
- Last year First Union and Wachovia initiated a merger, creating the nation's fourth-largest financial services institution
- Corporate initiative to relocate remote data centers to a centralized location
- Meet and exceed OCC availability and disaster recovery objectives
- Increase stockholder value by reducing total cost of ownership, consolidating the current environment, and leveraging new technologies

4 Strategy
- Identify the projects with the most to gain from:
  - consolidating to a new hardware platform and reducing support and maintenance costs
  - providing high availability
  - architecting a more robust disaster recovery platform
- A TCO study completed by HP helped identify which projects would benefit most from consolidation, upgrades, etc.

5 Why we chose Consolidation
- Reduce maintenance cost by eliminating older or obsolete hardware
- Reduce the number of servers to maintain while increasing total TPM from 768,000 to 1,600,000
- Provide a robust hardware environment with additional high-availability components
- Provide the hardware foundation for disaster recovery initiatives

6 Why we chose Superdome
- Scalability
- High-availability features inherent to the hardware
- Performance data

7 Hardware Identified for Consolidation
- (6) V-Class servers
- (10) K-Class servers
- (4) N-Class servers
- (2) T-Class servers

8 Executive Summary
- Migrate 22 servers representing four business units to four Superdome servers (2 production, 2 DEV/DR). Covers over 50 applications residing on the 22 servers and approximately 15 terabytes of storage across production, development, and test
- Facilitate the current data center move and enable applications to meet merger objectives

9 Project Scope
- Install four Superdome 64000 servers in the new data centers
- Attach to the fully redundant next-generation network infrastructure
- Attach all SAN storage and reconfigure the I/O layout
- Implement a new backup strategy using TSM (see the client-configuration sketch after this list)
- Redesign the high-availability clusters and packages
- Migrate existing applications to Superdome
- Perform full disaster recovery tests for each project
- Decommission obsolete hardware
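The deck names TSM but does not show its configuration. As an illustration only, a minimal TSM client options stanza (dsm.sys) for one Superdome partition might look like the sketch below; the server name, address, and node name are hypothetical examples, not taken from the presentation.

    * Hedged sketch of one dsm.sys server stanza; all names are examples
    SErvername        tsm_master
      COMMMethod        TCPip
      TCPPort           1500
      TCPServeraddress  tsm-master.example.com
      NODEname          dome1_par1
      PASSWORDAccess    generate

With a stanza like this in place, dsmc incremental drives progressive incremental backups and dsmc restore handles file-level recovery, with the client scheduler covering the nightly backup windows.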

10 Timeline
May through October: just six months (key milestones on the slide: moratorium, merger complete)

11 Migration Philosophy
Rely on what we know works:
- HP LVM
- MC/ServiceGuard
Make the improvements that must be made:
- Network
- SAN
- Disk and LVM layout
- Backup/recovery software
- MC/ServiceGuard
- Basic DR implementation
- Maintenance strategy

12 Phases of Migration
1. Discovery
2. Design
3. Implementation
4. Migration
5. Support and Maintenance
6. Future

13 Discovery
What can improve?
- Performance
- Availability
- Disk and file system layout
- Backup windows
- Network infrastructure
- System standards
What works well? Hmm…

14 Design
- Four Superdomes in identical configurations
- 2 Superdomes in the production cluster at one location
- 2 Superdomes in the Dev/DR cluster at an alternate location
- 4 partitions each
- 4 MC/ServiceGuard clusters (a cluster-build sketch follows this list)
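The deck does not show the cluster build itself. A minimal MC/ServiceGuard command sequence for one of the four clusters could look like the sketch below, assuming two partitions per cluster; the node names (dome1_par1, dome2_par1), package name, and file paths are hypothetical examples.

    # Generate a cluster configuration template from the two member partitions
    cmquerycl -v -C /etc/cmcluster/cluster.ascii -n dome1_par1 -n dome2_par1
    # ... edit cluster.ascii: cluster name, heartbeat LANs, cluster lock disk ...
    cmcheckconf -C /etc/cmcluster/cluster.ascii    # validate the edited configuration
    cmapplyconf -C /etc/cmcluster/cluster.ascii    # distribute the binary config to both nodes
    cmruncl -v                                     # start the cluster

    # Skeleton for one application package
    cmmakepkg -p /etc/cmcluster/pkg10/pkg10.ascii  # package configuration template
    cmmakepkg -s /etc/cmcluster/pkg10/pkg10.cntl   # package control script template
    cmcheckconf -P /etc/cmcluster/pkg10/pkg10.ascii
    cmapplyconf -P /etc/cmcluster/pkg10/pkg10.ascii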


17 Storage Design
- Current layout:
  - Some direct-attached arrays
  - Some attached via SAN
  - Outdated EMC TimeFinder version used for database replication
  - Logical volumes are not striped and may sit on a single spindle
  - Disk array is mirrored but not striped
  - 2 Fibre Channel controllers on each host
- Target layout:
  - All storage SAN-attached
  - Install the current EMC TimeFinder version and rewrite the production scripts to replicate the database and back up file systems offline (a TimeFinder sketch follows this list)
  - Stripe logical volumes using the LVM distributed option to spread extents across multiple spindles
  - Continue with the mirrored array configuration
  - 4 Fibre Channel controllers on each partition to attach to storage
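The slide only names TimeFinder. As an illustration, a rewritten replication/offline-backup script might drive a SYMCLI establish/split cycle along these lines; the device group name and device numbers are hypothetical examples.

    # Hedged sketch of a TimeFinder BCV cycle for database replication and offline backup
    symdg create proddg -type REGULAR           # device group for the production volumes
    symld  -g proddg add dev 01A                # add standard devices (example device number)
    symbcv -g proddg associate dev 0C1          # associate BCV mirrors with the group
    symmir -g proddg establish -full -noprompt  # synchronize BCVs with the standards
    # ... quiesce the database (hot-backup mode or shutdown) ...
    symmir -g proddg split -noprompt            # split off a point-in-time copy
    # ... resume the database, mount the BCV copy on the backup host, run the offline backup ...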

18 Volume Group & Logical Volume Layout
- Scripts are used to build each environment (a sketch follows this list):
  - Run inq and create the device list
  - pvcreate
  - vgcreate with alternate links and physical volume groups (PVGs)
  - lvcreate (lvcreate -D y -s g -r N)
  - newfs and fsck
  - Create VG map files and ftp them to the alternate system
- Naming standards: vgpkg10, /pkg
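A minimal sketch of what one of those build scripts could look like, assuming HP-UX 11.x LVM with VxFS; the volume group name follows the vgpkg10 standard above, while the device paths, sizes, and minor number are illustrative only.

    # Inventory SAN devices with the EMC inq utility and keep a device list
    inq > /tmp/devlist

    # Initialize the physical volume on its primary path
    pvcreate /dev/rdsk/c4t0d1

    # Create the volume group with a physical volume group (PVG)
    mkdir /dev/vgpkg10
    mknod /dev/vgpkg10/group c 64 0x010000
    vgcreate -g PVG0 /dev/vgpkg10 /dev/dsk/c4t0d1
    vgextend /dev/vgpkg10 /dev/dsk/c6t0d1       # second path to the same LUN = alternate link

    # Distributed allocation (-D y), PVG-strict (-s g), no bad-block relocation (-r N)
    lvcreate -D y -s g -r N -L 4096 -n lvol_data /dev/vgpkg10

    # Build and check the file system
    newfs -F vxfs /dev/vgpkg10/rlvol_data
    fsck -F vxfs /dev/vgpkg10/rlvol_data

    # Write the VG map file, then ftp it to the alternate system for vgimport
    vgexport -p -s -m /tmp/vgpkg10.map /dev/vgpkg10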

19 Disaster Recovery Strategy
- Currently, all recovery is from tape via the TSM master at a remote location
- The future design is still being determined, driven by each business unit's priority and budget
- MetroCluster and Oracle replication are being evaluated

20 Current Status
- To date, 6 production applications are live
- Over 40 more to go between now and mid-October
- DR testing to be complete by October
- Performance tests are still being run, but results so far are very positive… We're SMOKIN'!

21 Questions or Discussion
Thanks for attending

