CERN openlab Total Cost of Ownership
11 November 2003
Sverre Jarp, CERN openlab

Agenda
Day 1 (IT Amphitheatre)
Introductory talks:
09:00 – 09:30  Welcome (W. von Rüden)
09:30 – 10:15  Setting the scene: A Global View of the plans for the LHC Computing Grid (L. Robertson)
10:15 – 10:45  Coffee break
Part 2:
10:45 – 11:15  Review of calculated cost summary for 2003 (S. Jarp)
11:15 – 12:00  Estimated LCG Personnel and Materials costs (2004 – 2008) (B. Panzer)
12:30 – 14:00  Lunch
Part 3:
14:00 – 14:45  Planning the LCG Fabric at CERN (T. Cass)
14:45 – 15:30  External testimony: Michael Levine (Pittsburgh Supercomputer Center)
15:30 – 16:00  Coffee break
16:00 –        Discussions and conclusion of Day 1 (All)
Day 2 (IT Amphitheatre morning, afternoon)
08:45 – 10:45  One-on-one with HP
11:00 – 13:00  One-on-one with Intel
14:00 – 16:00  One-on-one with IBM

AIM
Understand the Total Cost of Ownership of the “Linux fabric” in the Computer Centre:
- CPU servers
- Disk servers, tape servers + silos
To a “sufficient” level of detail
2003 (July/August) budget situation
- Rather than actual sums across all cost centres
Personnel + Materials
No direct measurement of quality
- Nor of the fraction of capacity actually in use at a given time

Cost of manpower
Complicated issue. Approach chosen:
- Take the IT salary budget for staff and fellows
- Including direct benefits (pension fund, health care, etc.)
- Without counting management overhead, neither inside IT nor at CERN level
- Actual number: KCHF
Excluded: associates + students
- In general, paid from the materials budget
- Associates: some do not necessarily spend most of their time at CERN
- Students: avoid very short stays (summer students)

IT budget categories
In the TCO exercise the total IT budgets were initially split into 9 categories:
1. Computer Centre Services: cost of the Physics Fabric and closely related services, such as internal and external networking, etc.
2. Grid Development/Deployment
3. Openlab
4. Computing Infrastructure (non-physics oriented): Windows, Mail, Web services, phone exchange, etc.
5. Product Support (mainly for engineers): digital electronics, CAD, license office, Solaris support, etc.
6. Controls for physics experiments, the so-called “on-line computing”
7. Database Services and Applications
8. Communications Infrastructure for the LHC Accelerator
9. Divisional Management

IT Division (total)
Category | Materials (KCHF) | FTE | % of IT total (Materials + FTEs)
(Basic) Computer Centre Services | 12, | |
Grid Development and Deployment | | |
Openlab | | |
Infrastructure (non-physics) | 4, | |
Product Support | 1, | |
Controls | | |
Databases and Applications | 1, | |
Accelerator Communications | 3, | |
Divisional Management | | |
Total | 25, | |
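The “% of IT total (Materials + FTEs)” column combines money and head count; the slide does not say how the two are merged. One plausible way, sketched below, is to convert FTEs into cost with an average cost per FTE and take the share of the combined total. The rate and the sample figures in the example call are purely illustrative assumptions, not values from the slide.

```python
# Hypothetical illustration of a combined (materials + personnel) share.
# Assumption: FTEs are converted to cost with an average rate; the rate and
# the example numbers below are illustrative, not taken from the slide.
AVG_COST_PER_FTE_KCHF = 140.0

def combined_share(materials_kchf: float, ftes: float,
                   total_materials_kchf: float, total_ftes: float) -> float:
    """Share of the IT total once materials and personnel are both in kCHF."""
    item = materials_kchf + ftes * AVG_COST_PER_FTE_KCHF
    total = total_materials_kchf + total_ftes * AVG_COST_PER_FTE_KCHF
    return item / total

# Example with made-up FTE counts (only the materials column partially survives).
print(f"{combined_share(12_000, 60, 25_000, 300):.0%}")
```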

Box count
Central Linux batch “white box” servers: 1490
Central IDE disk servers: 250
Central PC tape servers: 60
Linux systems in the Computer Centre: 1800
Linux systems outside the Computer Centre: 1600
Central Windows servers: 150
Central Solaris servers: 50
IP devices on campus: 28K
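The first three rows sum exactly to the Computer Centre figure, which suggests (though the slide does not state it) that the 1800 total is simply batch + disk + tape servers:

```python
# Sanity check on the box count: batch + disk + tape vs. the quoted total.
batch, disk, tape = 1490, 250, 60
assert batch + disk + tape == 1800  # "Linux systems in the Computer Centre"
```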

Capacity Numbers
Other relevant capacity numbers are:
- Total installed PC CPU capacity: ~1.2 million SpecInt2000 (SI2K)
- CPU capacity purchased by August: 400 * 1.6 KSI2K = 640 KSI2K
- Total disk space: 200 TB
- Disk space acquired by August: 50 TB at the required performance (actually 65 TB)
- Tape silos installed: 10 (with 6000 slots each)
- Tape drives: 50 STK 9940B (+ 20 STK 9840 with low usage)
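A small sketch of the arithmetic behind these figures; the counts and the 1.6 KSI2K per-server rating are taken from the slide, while the derived percentage is a simple inference added here:

```python
# Arithmetic behind the 2003 capacity figures (values from the slide).
installed_cpu_ksi2k = 1200           # ~1.2 million SI2K installed in total
servers_bought = 400                 # CPU servers purchased by August
per_server_ksi2k = 1.6               # rating used on the slide

bought_ksi2k = servers_bought * per_server_ksi2k         # 640 KSI2K
share_of_installed = bought_ksi2k / installed_cpu_ksi2k  # ~53%

tape_slots = 10 * 6000               # 10 silos with 6000 slots each

print(f"CPU bought: {bought_ksi2k:.0f} KSI2K "
      f"({share_of_installed:.0%} of installed capacity)")
print(f"Tape slots installed: {tape_slots}")
```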

Cost of Central Services
- Batch/Interactive services: acquisition costs for batch/interactive servers, software licenses and maintenance
- Managed storage: tape and disk servers (purchase and maintenance)
- Computer Centre Infrastructure: various contracts for contracted staff; Monitoring and Supervision; Remedy
- Fabrics: this budget line only covers the Computer Centre upgrade this year
- The two “Operations” lines: cover costs associated with the various groups (travel, associates/consultants, workstation purchases, etc.)

Cost of Central Services

Cost of Fabric

Costs beyond the fabric
In many cases, non-fabric-related costs dominate:
- Campus network: 2991 KCHF
- User Support and Quality Assurance: 1265 KCHF
- Computer security: 741 KCHF
- Etc.
In other words: the fabric benefits from large demands on other services.

CPU part of fabric

Disk/tape part of fabric

Summary
All in all, IT estimates (on average) that it has spent ~(1 + 1) MCHF on basic hardware across:
- CPU servers
- Disk servers + tape servers
Our calculation shows:
- Linux fabric in total: 18.6 MCHF
- CPU part: 10.5 MCHF
- Disk/tape part: 6.3 MCHF
- External networking: 1.8 MCHF
Cost on top of basic hardware: 14.8 / 2 = 7.4
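One plausible reading of the closing arithmetic, sketched below: the CPU and disk/tape parts together come to 16.8 MCHF; subtracting the ~2 MCHF of basic hardware leaves 14.8 MCHF, i.e. roughly 7.4 times the hardware spend. The decomposition is an inference; the slide only states the final ratio.

```python
# Assumed reconstruction of the summary arithmetic (not stated on the slide), in MCHF.
cpu_part = 10.5
disk_tape_part = 6.3
basic_hardware = 1.0 + 1.0           # ~1 MCHF CPU + ~1 MCHF disk/tape servers

on_top = cpu_part + disk_tape_part - basic_hardware   # 14.8 MCHF
ratio = on_top / basic_hardware                       # 7.4

print(f"Cost on top of basic hardware: {on_top:.1f} MCHF "
      f"({ratio:.1f}x the hardware spend)")
```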

Backup

(Replacement) cost of IT main entities
Entity | Acquisition (CHF) | Installation, Maintenance (CHF) | Quantity (per unit) | CERN total | Acquisition (so far) 2003
CPU server | 2, | | SI2K | 1.2 MSI2K | 400
Disk server | 10,000 | 2,000 | 1 TB (1.3) | 200 TB | 50
Tape cartridge | | | GB | |
Tape silo | 150,000 + | | 6,000 slots | 10 | -
Tape drives (9940B/9840) | 30,000/N.A. | | /20 MB/s | |
AFS home/project | 80,000 | | 1 TB | 4 TB |
AFS scratch | 10,000 | 2,000 | 1 TB | |
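As an illustration of how a table like this feeds a replacement-cost estimate, the sketch below multiplies per-unit acquisition prices by installed quantities. The disk, silo and drive figures come from the legible cells above and the box-count slide; the CPU-server unit price did not survive transcription, so cpu_server_price_chf is a placeholder assumption rather than a slide figure.

```python
# Hedged sketch: replacement cost ~= sum(per-unit acquisition price * installed quantity).
# cpu_server_price_chf is an illustrative placeholder (only "2," is legible above).
cpu_server_price_chf = 2_500
cpu_servers = 1490                   # central Linux batch servers (box-count slide)

disk_server_price_chf = 10_000
disk_servers = 200                   # 200 TB total at ~1 TB per server

tape_silo_price_chf = 150_000
tape_silos = 10

tape_drive_price_chf = 30_000        # STK 9940B
tape_drives = 50

replacement_cost_chf = (cpu_servers * cpu_server_price_chf
                        + disk_servers * disk_server_price_chf
                        + tape_silos * tape_silo_price_chf
                        + tape_drives * tape_drive_price_chf)
print(f"Illustrative replacement cost: {replacement_cost_chf / 1e6:.1f} MCHF")
```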