Texas Nodal Market Implementation Program Infrastructure Update, October 22, 2007.

Presentation transcript:

Texas Nodal Market Implementation: Program Infrastructure Update, October 22, 2007

INF: Project Summary - Initial Charter with Approved Changes
Project area: Infrastructure
Description: Provision of development, testing, EDS, and production environments across the Program
Vendor(s): IBM, EMC, Oracle
Project Manager: David Forfia
Key deliverables / short-term deliverables:
–Hardware specifications
–Hardware procurement
–Data center capacity resolution
–IT Services Catalogue
–Service Level Agreements for all Nodal projects
–Project development & test (FAT) environments
–Integration testing (SAT) environments
–EDS environments
–Production environments
–Market Participant Identity Management (new)
–Release Management processes (from INT)
–Oracle 10g upgrade for EDW (new)
–High availability monitoring environment (new)
Key assumptions:
–Infrastructure capacity can be incrementally added as the project progresses using IBM's capacity-upgrade-on-demand model
–Data center capacity issues will be resolved in the next 90 days
Challenges/Risks:
–Existing data center capacity (power)
Comments:
–IT Operations will be the first ERCOT function to transition to Nodal operations, starting with setting up development environments

INF: Project Summary - Initial Delivery Plan
[Timeline chart, Q3 2006 through Q3 2009. Tracks: EDS 3 release (design, build, pre-FAT, FAT, ITEST, data validation); EDS 4 release (requirements, conceptual design, build, FAT, ITEST, MP SAT); NMMS (planning UI, operations UI, common model update process); computing infrastructure (storage, software, data center capacity, database and portal/integration licensing strategies, decommissioning plan, data center virtualization, colocation, existing data center upgrade, new data center facility, disaster recovery sites, capacity planning and capacity upgrades, capacity upgrade on demand). Milestones: 3/31/08 Single Entry Model GO LIVE; 6/30/08 LMP Market Readiness Criteria MET; 12/01/08 Real Time Operations GO LIVE; 12/08/08 Day Ahead Market/CRR GO LIVE; Zonal Shutdown GO/NO GO.]

INF: Project Summary – Actual Delivery Plan is slightly later than planned
[Updated version of the preceding timeline chart, showing the same tracks and milestones with actual progress running slightly behind the initial delivery plan.]

Data center virtualization was the only viable strategy to make the Nodal timeline
Move to a colocation site
–RFP issued in October 2006
–Insufficient capacity available to support ERCOT's specialized needs
Expand existing data centers
–Taylor: lead times for core equipment are longer than the Nodal program
–Austin: existing facilities already expanded to maximum capacity; long-term viability of the facility not determined
Acquire a new data center facility
–Lead times for acquisition and relocation fall outside Nodal timelines
–Currently being explored along with the viability of the Austin facility

Expanding existing data center capacity is an integrated process
Three components must be balanced to ensure a reliable data center:
–Standby generator capacity
–Uninterruptible power supply (UPS) capacity
–Data center air conditioning (DCAC) capacity
The maximum equipment capacity is determined by the minimum carrying capacity of any one of these components. We have taken every available step to maximize current capacity; the data centers are now running at the available capacity of the UPS systems at each site. All possible near-term facility upgrades have been completed.
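Because usable capacity is capped by the weakest of the three components, the headroom calculation reduces to a minimum. A minimal sketch in Python, with hypothetical kW figures (none of these numbers appear in the slides):

```python
# Usable data center capacity is limited by the weakest of the three balanced
# components: standby generation, UPS, and cooling (DCAC).

def usable_capacity_kw(generator_kw: float, ups_kw: float, dcac_kw: float) -> float:
    """Maximum supportable IT load, capped by the weakest component."""
    return min(generator_kw, ups_kw, dcac_kw)

def headroom_kw(current_load_kw: float, generator_kw: float,
                ups_kw: float, dcac_kw: float) -> float:
    """Capacity left before no more equipment can be added."""
    return usable_capacity_kw(generator_kw, ups_kw, dcac_kw) - current_load_kw

# Hypothetical site: generator 1200 kW, UPS 800 kW, cooling 950 kW.
# The UPS is the limiting component, so usable capacity is 800 kW,
# mirroring the slide's point that the sites run at UPS capacity.
print(usable_capacity_kw(1200, 800, 950))  # 800
print(headroom_kw(780, 1200, 800, 950))    # 20 kW of headroom left
```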

The growth in Nodal server deployments and capacity was correctly forecast
[Chart comparing forecast and actual Nodal server deployments and capacity over time.]

The majority of the roadmap is complete, but not all assumptions were right
Achievements to date:
–Power recovery: executed the Enterprise Architecture Power Recovery Plan (development storage retired, development virtualized, domain restructured, Quality Assurance moved to ACC, unused equipment retired)
–Server refreshes: EMS and non-EMS test/production virtualization (½ complete)
–Database hosting refresh (¾ complete): identified additional compression activities; relocated development database servers to the Blue Building
–Austin SAN refresh
–Taylor SAN refresh
–Database server clustering
–ACC moved to a dedicated UPS
–Remote access server farm redesign
–Application server refresh (pending)
–Self-cooled equipment racks (ordered)
Key assumptions that proved invalid:
–Server power consumption would drop exponentially. Power consumption per CPU has declined almost as much as assumed, but server memory power consumption has offset the CPU power savings.
–Nodal redundancy requirements would mirror Zonal. The Nodal systems require more active/passive and active/active deployments than the current Zonal market; the required level of redundancy and recoverability was not fully understood when the projections were made in February.
–Nodal environment requirements would mirror Zonal. Integrating a large number of best-of-breed solutions requires more environments to successfully develop the integration points, and market participants required both structured and unstructured testing environments to complete their development activities.

The server consolidation timeline was constrained to minimize risks
We will do this with minimum disruption. We have a plan to:
–Minimize the risk
–Maximize the benefit
–Lower overall costs
–Safeguard Market Operations
–Improve service levels
–Not affect Texas SET 3.0
However, there are always risks to be aware of in server migration. We are working with all the project managers and ERCOT committees to reduce risk and optimize the timing. During server consolidation, one production migration was deferred and two were successfully rolled back.
–The net effect was a five-week delay in final migrations and power recovery.

Migration Metrics
Servers: Starting Total | Decommissioned | Retired | % Remaining
Databases: Starting Total | Migrated | Retired/Refresh | % Remaining
Applications: Total Files | Migrated | Remaining | % Remaining
Scripts: Reviewed | Remaining | % Remaining
Production Database Storage (in GB): Pre-Migration Used 12,080 | Post-Migration Used 8,480 | Reclaimed 3,240 | Storage Savings 36%
Servers: Annual maintenance contracts will be cancelled on the retired database and application servers. Servers will be made available on the secondary market to recoup their residual value.
Databases: Unused or underused databases were eliminated, freeing additional licenses for use at Nodal vendor locations.
Database storage: Properly sizing the databases and compressing the data files where data has been removed is producing a 36% reduction in used production storage.
Server consolidation will result in lower expenses during and after Nodal implementation.
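The "% Remaining" and savings columns are simple ratios against the starting inventory. A minimal sketch of the arithmetic in Python; the counts below are placeholders for illustration, not the slide's figures:

```python
# How the migration-metrics columns are computed: each table tracks how much
# of the original estate remains after decommissioning or migration.

def percent_remaining(starting_total: int, removed: int) -> float:
    """Share of the original inventory still in service."""
    return 100.0 * (starting_total - removed) / starting_total

def storage_savings(pre_gb: float, post_gb: float) -> tuple[float, float]:
    """Reclaimed storage (GB) and savings as a share of pre-migration use."""
    reclaimed = pre_gb - post_gb
    return reclaimed, 100.0 * reclaimed / pre_gb

# Placeholder counts, for illustration only:
print(percent_remaining(200, 150))     # 25.0 -> 25% of servers remaining
print(storage_savings(10_000, 6_400))  # (3600.0, 36.0) -> 36% savings
```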

System requirements are driven by the projects' detailed design documents
Each project team has an assigned architect who is responsible for the architecture of that project's deliverables. IDA consolidates these architectures into a deployment diagram, which becomes individual work requests for operations to deploy the systems.

Market trials and integration testing will drive changes to the environments
The infrastructure plan assumes that the initial deployment specifications will have to change. The infrastructure technologies selected as the core of the Nodal system were chosen because they adapt well to change.

Critical Path Mitigation Strategies: Capacity Issues – Scale-Up Option
Scale-up options are implemented on the largest computing systems in the data center. Extra capacity is available inside the system to enable when necessary, or can be added without system downtime.
–Provides the ability to scale up to the maximum capacity of the system to meet usage demands
–The system can recover from multiple component failures (CPU, memory, power supply, I/O) using spare capacity within the system
–System availability above the 99.9% threshold
–Provides the ability to balance the load across all applications running on the system
–Minimum power usage configuration
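A minimal sketch of the scale-up model in Python, capacity upgrade on demand within a single large system; the class and field names are illustrative assumptions, not ERCOT tooling:

```python
# Scale-up: one large frame with installed-but-dormant capacity that can be
# activated on demand without downtime (the capacity-upgrade-on-demand model).
# For reference, 99.9% availability allows roughly 8.8 hours of downtime
# per year (0.001 * 8766 h).

from dataclasses import dataclass

@dataclass
class ScaleUpSystem:
    installed_cpus: int  # physically present in the frame
    enabled_cpus: int    # activated and carrying load

    def enable_on_demand(self, extra: int) -> None:
        """Activate dormant CPUs in the running system, with no downtime."""
        if self.enabled_cpus + extra > self.installed_cpus:
            raise ValueError("demand exceeds installed capacity; order hardware")
        self.enabled_cpus += extra

# Hypothetical frame: 32 CPUs installed, 20 active. Enabling 8 more absorbs
# a load spike, or replaces capacity lost to a failed component.
frame = ScaleUpSystem(installed_cpus=32, enabled_cpus=20)
frame.enable_on_demand(8)
print(frame.enabled_cpus)  # 28
```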

Critical Path Mitigation Strategies: Capacity Issues – Scale-Out Option
Scale-out options are implemented on the smaller computing systems in the data center. Processing load is split across multiple systems to meet the business requirements.
–Requires duplicate computing resources on two separate physical servers
–Provides the ability to scale out to the maximum capacity of all systems in the cluster to meet usage demands
–Classic Windows/Linux capacity expansion strategy, typically with a load-balancing appliance
–Provides the ability to do maintenance on a server without impacting the system
–System availability above 99.99%
ERCOT will utilize both a scale-up and a scale-out strategy to meet the business requirements of Nodal.
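A toy sketch of the scale-out pattern in Python, a balancer spreading requests across a small cluster; the server names and round-robin dispatch are illustrative assumptions standing in for a real load-balancing appliance:

```python
# Scale-out: identical servers behind a load balancer. Any node can be
# drained for maintenance while the others absorb its share of the load.
# For reference, 99.99% availability allows roughly 53 minutes of downtime
# per year.

from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers: list[str]):
        self.servers = list(servers)
        self._rotation = cycle(self.servers)

    def dispatch(self) -> str:
        """Choose the next server in rotation for an incoming request."""
        return next(self._rotation)

    def drain(self, server: str) -> None:
        """Take one node out for maintenance; the service stays up."""
        self.servers.remove(server)
        self._rotation = cycle(self.servers)

cluster = RoundRobinBalancer(["app-01", "app-02", "app-03"])
print([cluster.dispatch() for _ in range(4)])  # app-01, app-02, app-03, app-01
cluster.drain("app-02")                        # maintenance window on app-02
print([cluster.dispatch() for _ in range(3)])  # only app-01 and app-03 serve
```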

Critical Path Mitigation Strategies: Procurement and Architectural Delays
Risk: Long procurement lead times
Mitigation: Define system requirements as soon as possible and place orders. Utilize existing systems or virtual machines where appropriate.
Risk: Late technical architecture design documents
Mitigation: Preorder unassembled equipment and have ERCOT staff build to specification. Authorize overtime to build systems. Pre-build standard server configurations based on the service catalogue and assign them to projects as requirements become known.
Risk: Server consolidation decommissions behind schedule
Mitigation: Run the data center in the safe "buffer" zone of capacity.

Questions