CD FY10 Budget and Tactical Plan Review
FY10 Tactical Plans for Scientific Computing Facilities / General Physics Computing Facility (GPCF)
Stu Fuess, 06-Oct-2009
FY10 Tactical Plan for GPCF: CD DocDB # 3329

What is the GPCF?
- A task force was charged on 7/25/09 to develop a facility to meet the computing needs of the Intensity and Cosmic Frontier programs:
  – Collect requirements (largely following the NuComp work)
  – Architect a system
  – Plan for acquisition and installation in FY10
  – Evolve current facilities into the GPCF
  – Replace similar functions from FNALU
- The work is being done with "borrowed" effort from 5 departments.
- The design is not yet done, so only crude M&S estimates are available at this time.

Proposed Facilities for FY10
[Figure: diagram of the proposed FY10 facilities — GPCF, FermiGrid, and FermiCloud — showing their stakeholders (Intensity Frontier experiments, current FNALU scientific users, DMS, FEF, GRID, REX, CET, SSE, HPPC, OSG, CMS Tier-3, other experiments and operations) and their uses (analysis, experiment development, glideinWMS testing, grid and storage development/integration/testing, cloud studies, PBS testing, storage and virtualization investigations), with a legend distinguishing stakeholders, uses, and storage flavors.]

Guiding principles for the GPCF
- Use virtualization
- Training ground and gateway to the Grid
- No undue complexity – user and admin friendly
- Model after the CMS LPC where sensible
- Expect to support / partition the GPCF for multiple user groups

Components of the GPCF
- Interactive Nodes
  – VMs dedicated to user groups, plus "fnalu" general VMs (see the sketch below)
- Local Batch Nodes
  – VMs in sufficient number for job testing, leading to eventual Grid submissions
- Server / Service Nodes
  – VM homes for group-specific or system services
- Storage
  – BlueArc, dCache, or otherwise (Lustre, HDFS?)
- Network infrastructure
  – Work with LAN to make sure adequate resources are available
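The partitioning of interactive VMs among user groups was still being designed at this point; the following is a minimal illustrative sketch, not part of the original plan, of how a group-to-VM allocation might be expressed. All group names and fractional shares are hypothetical; only the host counts echo the FY10 M&S request later in this talk.

```python
# Hypothetical sketch of a GPCF VM allocation map; group names and
# shares are illustrative only, not taken from the tactical plan.
from collections import OrderedDict

# Requested hardware roles (counts follow the FY10 M&S request:
# 16 interactive, 32 local batch, 4 application servers).
HOSTS = {"interactive": 16, "local_batch": 32, "service": 4}

# Hypothetical share of interactive VMs per user group, plus the
# general-purpose "fnalu" pool that replaces FNALU functionality.
INTERACTIVE_SHARES = OrderedDict([
    ("minos",   0.25),
    ("minerva", 0.25),
    ("nova",    0.25),
    ("fnalu",   0.25),   # general users
])

def interactive_allocation(total_vms, shares):
    """Split a pool of interactive VMs among groups by fractional share."""
    alloc = {group: int(round(total_vms * share)) for group, share in shares.items()}
    assert sum(alloc.values()) <= total_vms, "over-allocated the VM pool"
    return alloc

if __name__ == "__main__":
    # Assume (illustratively) one VM per interactive host.
    print(interactive_allocation(HOSTS["interactive"], INTERACTIVE_SHARES))
```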

FY10 Tactical Plan for SCF/GPCF
Tactical Plan Leaders: Stu Fuess, Eileen Berman
- Service Activity List
  – SCF/GPCF/Operations
  – SCF/GPCF/Support
- Project Activity List
  – SCF/GPCF/Management
  – SCF/GPCF/Integration and Development
GPCF is a new activity tree. The Management and Integration and Development activities are described below as the precursors to the service activities.

Project Activity: SCF/GPCF/Management
Goals Related to this Activity [from Tactical Plan]
  – Specify GPCF architecture and design
    Task force project; new goal, high priority – other work follows
  – Determine GPCF governance
    How do we run a facility with many contributors? New goal, high priority
Key Milestones
  – Oct '09: WBS tasks for Integration and Development listed and assigned
  – Oct '09: FNALU transition plan
    Agreement with CSI on split of responsibilities; user town meeting
Project Documentation: DocDB # 3453 (for all SCF/GPCF activities)
Issues and Risks (specific to this activity, includes allocation impact)
  1. Design is largely a "management" activity because of the level of the people involved in the early phases of the project; difficulty in getting the task force together and converging slows design progress.
  2. Multiple possible architecture choices may lead to excessive debate, delaying implementation.
  3. October is a busy month!

Project Activity: SCF/GPCF/Integration and Development
Goals Related to this Activity [from Tactical Plan]
  – Provide a robust, stable, and secure general facility to enable proper data management and analysis for the Intensity and Cosmic Frontier programs.
  – Subsume similar functionality currently offered on FNALU
Key Milestones
  – Nov '09: Operational facility
    Embryonic interactive, batch, and storage functionality; high impact for NOvA
  – Nov '09: Monitoring infrastructure
    At the level of the Run II facilities
  – Spring '10: Phase 2 upgrades
    Additional functionality and capacity
Issues and Risks (specific to this activity, includes allocation impact)
  1. The procurement schedule is dependent upon design completion.
  2. A delay in procurement will lead to a delay in operation.
  3. Aggressive schedule: the implication is that we may not get it exactly correct the first time, hence "Phase 2".

Ripple Effect on Shared IT Services
- The FermiCloud development platform will support the GPCF reliance on VMs
- Storage will be distributed among BlueArc, dCache, and local disks
- Additional network ports will be needed
- The GPCF physical location is TBD

FY10 FTE and M&S: Request vs. Allocation
Level 0/1 Activity: SCF/GPCF
Details of SWF...
The Tactical Plan calls for a total of 2.1 FTE-years, but BLIs have been entered assuming the SWF contribution sits within various department activities. Expected:
  – .3 FTE from DMS (.025 MC, .02 GO, .25 MB)
  – .5 FTE from GRID (.1 EB, .2 GG)
  – .5 FTE from FEF (-)
  – .5 FTE from REX (-)
  – .3 FTE from SCF Quadrant (.3 SF)
We are still working out how to budget GPCF effort among the departments. A small arithmetic check of the total appears below.
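As a quick cross-check of the 2.1 FTE-year figure, the sketch below (not part of the original slide) simply sums the expected departmental contributions listed above.

```python
# Sum the expected per-department SWF contributions to GPCF (FTE-years)
# and compare against the 2.1 FTE-year total in the Tactical Plan.
expected_fte = {
    "DMS": 0.3,
    "GRID": 0.5,
    "FEF": 0.5,
    "REX": 0.5,
    "SCF Quadrant": 0.3,
}

total = sum(expected_fte.values())
print("Total expected SWF: %.1f FTE-years" % total)   # -> 2.1
assert abs(total - 2.1) < 1e-9
```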

FY10 FTE and M&S: Request vs. Allocation
Level 0/1 Activity: SCF/GPCF
Details of M&S

  Qty  Description             Unit Cost  Extended Cost  Fund Type
  16   Interactive Nodes       $3,300     $52,800        EQ
  32   Local Batch Nodes       $3,100     $99,200        EQ
  4    Application Servers     $3,900     $15,600        EQ
  3    Disk Storage            $22,000    $66,000        EQ
  1    Storage Network         $10,000    $10,000        EQ
  1    Network Infrastructure  $40,000    $40,000        EQ
  1    Racks, PDUs, etc.       $3,000     $3,000         EQ
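For convenience, here is a small sketch (not part of the original slide) that recomputes each extended cost (quantity × unit cost) and the implied total equipment request from the table above.

```python
# Recompute extended cost = qty * unit cost for each M&S line item and
# sum the total equipment request (values from the table above).
items = [
    # (qty, description,           unit cost in $)
    (16, "Interactive Nodes",       3300),
    (32, "Local Batch Nodes",       3100),
    (4,  "Application Servers",     3900),
    (3,  "Disk Storage",           22000),
    (1,  "Storage Network",        10000),
    (1,  "Network Infrastructure", 40000),
    (1,  "Racks, PDUs, etc.",       3000),
]

total = 0
for qty, desc, unit in items:
    extended = qty * unit
    total += extended
    print("%-24s %2d x $%6d = $%7d" % (desc, qty, unit, extended))

print("Total EQ request: $%d" % total)   # prints 286600
```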

Impact of Preliminary Allocation
- The current M&S request is approximately 2x the requests compiled by the NuComp task force
  – Probably not a bad assumption that they underestimated
- Some extra capacity allows variation and trial in the development phase
  – But FermiCloud, if it exists, can provide some of this functionality
- There is some redundancy with the BlueArc requests in the NVS worksheets – this needs to be resolved
- Scaling the M&S node allocation back is acceptable if development flexibility is provided via FermiCloud
- Scaling the M&S disk allocation back is acceptable once the overlap with the BlueArc requests is understood

Summary of Past Action Items
For the record… there are past action items which will be partially addressed by the GPCF:
- CDACTIONITEM-210
  – How are running non-neutrino, non-collider experiments handled?
- CDACTIONITEM-154
  – Review the mission and need for FNALU, including known critical roles and applications

Tactical Plan Summary
- The GPCF is a new computing facility addressing the needs of the "small" experiments and the general scientific community
  – It is constructed of VMs
  – Storage, and the requirements generated by the usage patterns, is the most difficult part of the design
- It is currently in the design phase, so budget estimates are somewhat rough
- We foresee an early implementation phase as soon as M&S funds are available, followed by a Spring upgrade phase targeted at broader needs