LHC Computing re-costing for 2006-2010 for the CERN T0/T1 center


LHC Computing re-costing for 2006-2010 for the CERN T0/T1 center
26 June 2003, Bernd Panzer-Steindel, CERN/IT

Introduction

The costing exercise focuses on the implementation of the LHC computing fabric for the CERN T0 + T1 center. It covers phase 2 of the LCG project (2006-2008) and the following two years of operation (2009-2010). The following six areas are covered:

- CPU resources
- Disk storage
- Tape storage
- Local network (LAN)
- Outside network (WAN)
- System administration

More details: http://lcg-computing-fabric.web.cern.ch/LCG-ComputingFabric/lhc_computing_cost_re-calculation.htm

History

- October 1999: PASTA II
- February 2001: Report of the steering group of the LHC Computing Review ('Hoffmann' report)
- February 2002: Task Force 1 report
- October 2002: PASTA III
- March 2003: Re-costing exercise

Ingredients

To estimate the cost and its evolution over the years, several input parameters need to be taken into account (a sketch of how they combine follows below):

- Technology evolution (PASTA)
- Today's cost reference points of key components (e.g. 2.4 GHz processor, 120 GB disk, etc.)
- The prediction of the future price development (slope) of the components
- The experiments' estimates of the capacity and resources needed in 2006 and onwards
- The computing architecture and model
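
As a minimal sketch of how these ingredients combine (all numbers below are illustrative placeholders, not the actual inputs of the exercise): the unit cost of a component is extrapolated from today's reference point along the price slope, and multiplied by the capacity installed in each year.

```python
# Minimal sketch of the re-costing arithmetic. All numbers are
# illustrative placeholders, not the actual inputs of the exercise.

REF_YEAR = 2003
UNIT_COST_REF = 0.50   # CHF per SI2000 today (hypothetical reference point)
SLOPE = 0.6            # hypothetical: unit cost falls to 60% of itself each year

def unit_cost(year):
    """Projected unit cost in CHF per SI2000, extrapolated from the reference point."""
    return UNIT_COST_REF * SLOPE ** (year - REF_YEAR)

def yearly_cost(capacity_added):
    """Cost in million CHF per year for the capacity (SI2000) installed that year."""
    return {year: added * unit_cost(year) / 1e6
            for year, added in capacity_added.items()}

# Hypothetical staging plan: new capacity installed per year, in SI2000
print(yearly_cost({2006: 4.0e6, 2007: 4.5e6, 2008: 11.0e6}))
```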

Requirements

The requirements are defined by the key experiment parameters such as trigger rates and event sizes. Some changes and refinements are due to:

- Much more experience with data challenges and productions
- Better estimates and models
- Different strategies, e.g. ATLAS will keep one copy of the raw data at CERN while CMS will export its copy to the Tier 1 centers
- An optimized model for the staging of the equipment in 2006-2008

Architecture (I)

[Diagram: levels of complexity in the coupling of commodity building blocks, from single components up to the world-wide cluster. Hardware side: CPU and disk; PC (motherboard, backplane, bus, integrating devices such as memory, power supply, controller); storage tray, NAS server, SAN element; network (Ethernet, Fibre Channel, Myrinet, ...) with hubs, switches and routers; cluster; world-wide cluster. Software side: operating system and drivers; batch system, load balancing, control software, hierarchical storage systems; Grid middleware; wide area network. The coupling is both physical and logical.]

Architecture (II)

[Diagram: generic model of a fabric network: application servers, disk servers and tape servers interconnected by a network, with a link to the external network.]

Local computer farm of the LHC era: commodity disk, CPU and tape servers will be interconnected via a switch with the local and external networks. Commodity networks are becoming available with the required bandwidth. These fabrics will be built using commodity components, and the architecture should be able to exploit whatever industry decides is commodity. HEP is also very cost conscious; architectures have changed since LEP, from mainframes to Unix and now to PC farms.

CPU Resources (I)

Still focusing on the Intel 'deskside' PC. Additional costs have to be considered:

- Infrastructure (racks, cables, console, etc.)
- Efficiency (batch system, I/O wait, etc.)
- Market developments, i.e. the price difference between simple boxes and servers

Market: according to Gartner/Dataquest, the notebook share of the PC market in Q1 2003 was 33%. Intel claims that notebook sales will reach 40 million units this year.

Power: the power consumption per produced SI2000 is still constant. There are plans (Intel TeraHertz) to reduce this in the future, but they are not yet convincing. This has consequences for the upgrade to 2 MW of power and cooling in the center.
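
Because the power per SI2000 is taken as constant, the capacity plan translates directly into a power budget. A back-of-the-envelope sketch, where the watts-per-SI2000 figure is purely an assumption for illustration:

```python
# Power budget sketch. WATTS_PER_SI2000 is an assumed, illustrative figure;
# the slide only states that power per SI2000 stays roughly constant.
WATTS_PER_SI2000 = 0.06
capacity_si2000 = {2008: 19.1e6, 2010: 33.8e6}   # from the resource table below

for year, si2000 in capacity_si2000.items():
    print(f"{year}: ~{si2000 * WATTS_PER_SI2000 / 1e6:.1f} MW")
# A constant figure in this range drives the full 2010 capacity towards
# the planned 2 MW power and cooling limit.
```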

CPU Resources (II)

Processor reference points and slopes are taken from 'street' prices, the actual purchases in IT during the last 3 years, and PASTA III.

[Plot: processor price/performance evolution]

CPU Resources (III)

Year   Resource [million SI2000]   Cost [million CHF]
2005   -                           -
2006   3.7                         3.9
2007   8.2                         3.2
2008   19.1                        5.1
2009   25.4                        2.0
2010   33.8                        2.5
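
As a rough cross-check (ignoring replacement purchases and infrastructure overheads, which the real exercise includes), the implied price per SI2000 of each year's capacity increment can be read off the table:

```python
# Implied street price per SI2000 of each year's capacity increment,
# read off the table above. A rough check only: it ignores replacement
# purchases and infrastructure overheads, which the real exercise includes.
resource = {2006: 3.7e6, 2007: 8.2e6, 2008: 19.1e6, 2009: 25.4e6, 2010: 33.8e6}
cost_mchf = {2006: 3.9, 2007: 3.2, 2008: 5.1, 2009: 2.0, 2010: 2.5}

previous = 0.0
for year in sorted(resource):
    added = resource[year] - previous
    chf_per_si2000 = cost_mchf[year] * 1e6 / added
    print(f"{year}: {added / 1e6:4.1f} MSI2000 added at ~{chf_per_si2000:.2f} CHF/SI2000")
    previous = resource[year]
```

The falling CHF/SI2000 figures are the price/performance slope of the previous slide at work.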

Disk Storage (I)

Focusing on the commodity mass market: IDE and SATA disks. Extra costs:

- A few disks are attached to each server: server costs
- Infrastructure (racks, console, etc.)
- Efficiency of space usage (OS/filesystem overheads, etc.)

About 10% of the storage needs to be high-end storage, which is more expensive (roughly 4x), e.g. for databases; the blended effect on the unit cost is sketched below.
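
The 10% high-end share at roughly four times the commodity price translates into a simple blended cost multiplier; a minimal sketch:

```python
# Effective disk cost multiplier when ~10% of the capacity is high-end
# storage at roughly 4x the commodity price (fractions from the slide).
COMMODITY_SHARE, HIGH_END_SHARE = 0.90, 0.10
HIGH_END_PRICE_FACTOR = 4.0

multiplier = COMMODITY_SHARE * 1.0 + HIGH_END_SHARE * HIGH_END_PRICE_FACTOR
print(f"blended cost vs. pure commodity: {multiplier:.2f}x")   # -> 1.30x
```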

Disk Storage (II)

Disk reference points and slopes are taken from 'street' prices, the actual purchases in IT during the last 3 years, and PASTA III.

[Plot: disk price/performance evolution]

Disk Storage (III)

Year   Resource [PB]   Cost [million CHF]
2005   -               -
2006   1.0             5.1
2007   2.1             3.4
2008   3.8             3.5
2009   5.0             1.6
2010   6.7             1.4

Tape Storage (I)

Tape access performance is averaged over the year; dedicated, guaranteed resources are needed during the CDR of the heavy-ion period.

Tape storage infrastructure:
- The number of silos for the tapes
- A new building when the number of silos exceeds 16
- Maintenance costs per year

The technology lifetime is about 5 years, which requires replacing the equipment and re-copying the tapes (a considerable expense); the timing of the technology change is crucial.

Tape Storage (II)

Tape storage performance infrastructure (tape drives, tape servers, etc.)

Year   Resource [GB/s]   Cost [million CHF]
2005   -                 -
2006   1.1               1.2
2007   2.3               3.3
2008   3.9               -
2009   4.4               0.7
2010   -                 0.0

Tape Storage (III)

Tape storage: tape media, silos, building, replacement

Year   Resource [PB] (total available tape capacity)   Cost [million CHF]
2005   -                                               -
2006   6.0                                             5.3
2007   13.8                                            8.3
2008   25.1                                            6.5
2009   35.5                                            6.4
2010   48.4                                            10.6

Network Infrastructure (I)

- Network architecture based on several Ethernet levels in a hierarchical structure
- Implementation staged between 2005 and 2007: 20% - 50% - 100% (see the sketch after this list)
- Designed for an 8000-node fabric
- Completely new backbone based on 10 Gigabit Ethernet equipment; there are still some uncertainties in this market for high-end routers

[Diagram: hierarchical network structure. WAN attached via 10 Gigabit Ethernet (10,000 Mbit/s); backbone of multiple 10 Gigabit Ethernet links (200 x 10,000 Mbit/s); 10 Gigabit Ethernet down to the server layers; CPU servers on Gigabit Ethernet (1,000 Mbit/s); disk servers and tape servers.]
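
The staging plan maps directly onto the backbone bandwidth figures of the next slide; a minimal sketch, taking the 280 GB/s full capacity from that resource table:

```python
# Staged deployment of the backbone: 20% in 2005, 50% in 2006, 100% in 2007
# (staging fractions from the slide; 280 GB/s is the full 2007 capacity
# from the resource table on the next slide).
FULL_BACKBONE_GBS = 280
staging = {2005: 0.20, 2006: 0.50, 2007: 1.00}

for year, fraction in staging.items():
    print(f"{year}: {FULL_BACKBONE_GBS * fraction:.0f} GB/s")
# -> 56, 140 and 280 GB/s, matching the 2005-2007 rows of the table
```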

Network Infrastructure (II)

Year   Resource [GB/s]   Cost [million CHF]
2005   56                2.2
2006   140               -
2007   280               4.3
2008   -                 0.9
2009   -                 -
2010   420               1.6

System Administration (I)

The previous approach was based on outsourcing the system administration and was costed at about 1000 CHF per node. The new model is in-sourced: in the first years 7 administrators and 2 managers are responsible for system administration, increasing to 12 administrators later. This model seems more appropriate given the experience of the last years and the fact that the anticipated number of nodes is 'only' in the range of ~4000. It reduces the cost to about 400 CHF per node.
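
For a rough comparison of the two models, here is a sketch; the cost per FTE is an assumption chosen to reproduce the roughly 1.0 and 1.5 million CHF per year of the next slide, not a figure from the presentation:

```python
# Comparing the outsourced and in-sourced sysadmin cost models.
# FTE_COST_CHF is an assumed figure, not from the slide; it is chosen so
# that the staff totals reproduce the ~1.0 and ~1.5 MCHF of the next slide.
NODES = 4000
OUTSOURCED_CHF_PER_NODE = 1000
FTE_COST_CHF = 110_000

for admins, managers in [(7, 2), (12, 2)]:
    staff_cost = (admins + managers) * FTE_COST_CHF
    print(f"{admins}+{managers} staff: {staff_cost / 1e6:.2f} MCHF/yr, "
          f"~{staff_cost / NODES:.0f} CHF per node")

print(f"outsourced model: {NODES * OUTSOURCED_CHF_PER_NODE / 1e6:.1f} MCHF/yr")
```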

System Administration (II)

Year        Resource [FTE]   Cost [million CHF]
2005-2007   7+2              1.0
2008-2010   12+2             1.5

Wide Area Network (I)

- Optimistically, we should have access to 10 Gbit WAN connections already in 2004
- The move to 40 Gbit is much less clear
- The network providers are still undergoing frequent 'changes' (mergers, Chapter 11)
- Large over-capacity is available
- Today's 40 Gbit equipment is very expensive

Wide Area Network (II)

Year   Resource [Gbit/s]   Cost [million CHF]
2005   2.5                 1.0
2006   10                  2.0
2007   -                   -
2008   X * 10              -
2009   40                  -
2010   -                   -

Comparison

All units in million CHF.

Resource    Old 06-08   New 06-08   New-Old   Old 09-10   New 09-10   New-Old
CPU+LAN     17.7        19.5        +1.8      6.3         6.8         +0.5
Disk        6.3         11.9        +5.6      2.2         2.9         +0.7
Tape        22.5        27.8        +5.3      19.2        17.6        -1.6
WAN         11.4        6.0         -5.4      6.8         4.0         -2.8
Sysadmin    7.9         3.5         -4.4      6.6         3.0         -3.6
SUM         65.8        68.7        +2.9      41.1        34.3        -6.8

Budget: 60.0 (2006-08), 34.0 (2009-10) *

* A bug in the original paper is corrected here.
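
A few cells of this table are not directly readable in the transcript and are filled in above so that rows and columns stay arithmetically consistent; a quick cross-check of the column sums:

```python
# Cross-check of the comparison table: every column sum must reproduce the
# SUM row, including the cells reconstructed from the garbled transcript
# (Disk old 2006-08 and WAN old 2009-10).
rows = {  # resource: (old 06-08, new 06-08, old 09-10, new 09-10), in MCHF
    "CPU+LAN":  (17.7, 19.5, 6.3, 6.8),
    "Disk":     (6.3, 11.9, 2.2, 2.9),
    "Tape":     (22.5, 27.8, 19.2, 17.6),
    "WAN":      (11.4, 6.0, 6.8, 4.0),
    "Sysadmin": (7.9, 3.5, 6.6, 3.0),
}
labels = ["old 2006-08", "new 2006-08", "old 2009-10", "new 2009-10"]
for i, label in enumerate(labels):
    print(f"{label}: {sum(row[i] for row in rows.values()):.1f} MCHF")
# -> 65.8, 68.7, 41.1 and 34.3 MCHF, matching the SUM row
```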

Resource changes

Resource      2006-08   Change   2009-10   Change
CPU [MSI2K]   19        +12%     34        +19%
Disk [PB]     3.8       +79%     6.7       +58%
Tape [PB]     25        -1%      48        -

We are in the process of providing a detailed description and explanation of the resource requirement changes. This will be appended to the note describing the calculations and the parameters used in the re-costing Excel sheets.
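
The quoted percentage changes let one back out the previous requirement figures; a small sketch using old = new / (1 + change):

```python
# Backing out the pre-re-costing requirement from the new value and the
# quoted percentage change: old = new / (1 + change).
changes = {
    "CPU [MSI2K] 2006-08": (19.0, 0.12),
    "CPU [MSI2K] 2009-10": (34.0, 0.19),
    "Disk [PB]   2006-08": (3.8, 0.79),
    "Disk [PB]   2009-10": (6.7, 0.58),
    "Tape [PB]   2006-08": (25.0, -0.01),
}
for name, (new, change) in changes.items():
    print(f"{name}: previous requirement was ~{new / (1 + change):.1f}")
```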

Summary

- Better than anticipated price developments for the components
- Not technology changes but market changes are the most worrying factor
- The exercise needs to be repeated regularly (at least once per year)
- Very fruitful and constructive collaboration between IT and the experiments