… where the Web was born. CERN openlab Workshop on TCO, Introduction. Wolfgang von Rüden, IT Division Leader. 11 November 2003.
What is LHC? LHC will collide beams of protons at an energy of 14 TeV. Using the latest superconducting technologies, it will operate at about –270 °C, just above absolute zero. With its 27 km circumference, the accelerator will be the largest superconducting installation in the world. LHC is due to switch on in 2007. Four experiments, with detectors "as big as cathedrals": ALICE, ATLAS, CMS, LHCb.
The LHC Data Challenge. A particle collision = an event. Events are independent of one another, which provides trivial parallelism and hence allows the use of simple PC farms. The physicist's goal is to count, trace and characterize all the particles produced and fully reconstruct the process. Among all tracks, the presence of "special shapes" signals the occurrence of interesting interactions.
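Because each collision event is self-contained, reconstruction parallelises trivially across a farm of ordinary PCs. Below is a minimal Python sketch of that idea; the event dictionaries and the reconstruct_event function are hypothetical placeholders, not CERN's actual reconstruction software.

```python
# Sketch of event-level parallelism: independent events, simple worker farm.
# reconstruct_event() and the event records are illustrative placeholders only.
from multiprocessing import Pool

def reconstruct_event(event):
    """Count, trace and characterize the particles of one event (placeholder)."""
    # A real reconstruction would fit tracks, identify particles, etc.
    return {"event_id": event["id"], "n_tracks": len(event["hits"])}

if __name__ == "__main__":
    # Each event is processed on its own; workers never need to talk to each other,
    # which is why a plain farm of commodity PCs is enough.
    events = [{"id": i, "hits": list(range(i % 50))} for i in range(1000)]
    with Pool() as pool:
        results = pool.map(reconstruct_event, events)
    print(len(results), "events reconstructed independently")
```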
The LHC Data Challenge. Starting from this event… you are looking for this "signature". Selectivity: 1 in 10¹³. Like looking for one person in a thousand world populations, or for a needle in 20 million haystacks!
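As a rough sanity check of the analogy: a 2003-era world population of about 6.3 billion times one thousand gives a few times 10¹², the same order of magnitude as the quoted 1-in-10¹³ selectivity. A small sketch (the population and straws-per-haystack figures are assumptions, not from the slide):

```python
# Back-of-envelope check of the selectivity analogy.
# World population (~6.3e9, roughly 2003) and straws per haystack are assumed values.
selectivity = 1e13                      # 1 interesting event in 1e13 collisions
world_population = 6.3e9
people_in_a_thousand_worlds = 1000 * world_population
print(f"1 in {selectivity:.0e} vs 1 person in {people_in_a_thousand_worlds:.1e} people")

straws_per_haystack = 5e5               # assumed, for illustration only
print(f"needle in {selectivity / straws_per_haystack / 1e6:.0f} million haystacks")
```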
LHC data (simplified): 40 million collisions per second; after filtering, 100 collisions of interest per second; one Megabyte of digitised information for each collision, i.e. a recording rate of 0.1 Gigabytes/sec; 10¹¹ collisions recorded each year, i.e. 10 Petabytes/year of data. (Experiments: CMS, LHCb, ATLAS, ALICE.)
Scale of the units: 1 Megabyte (1 MB) = a digital photo; 1 Gigabyte (1 GB) = 1000 MB = a DVD movie; 1 Terabyte (1 TB) = 1000 GB = world annual book production; 1 Petabyte (1 PB) = 1000 TB = 10% of the annual production by the LHC experiments; 1 Exabyte (1 EB) = 1000 PB = world annual information production.
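A quick back-of-envelope check of the recording rate, using only the slide's own round numbers:

```python
# 100 interesting collisions per second, 1 MB each -> recording rate in GB/s.
filtered_rate_per_s = 100        # collisions of interest per second after filtering
event_size_bytes = 1_000_000     # one Megabyte of digitised information per collision

rate_bytes_per_s = filtered_rate_per_s * event_size_bytes
print(f"recording rate: {rate_bytes_per_s / 1e9} GB/s")   # -> 0.1 GB/s, as quoted
```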
LHC data. LHC data correspond to about 20 million CDs each year: a CD stack with one year of LHC data would be about 20 km high (figure for scale: Mt. Blanc 4.8 km, Concorde 15 km, balloon 30 km). Where will the experiments store all of these data?
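The CD figure follows from the 10 Petabytes/year of the previous slide. A rough sketch, assuming a typical ~650 MB CD capacity and ~1.2 mm disc thickness (neither value is from the slide):

```python
# Rough check of the "20 million CDs / ~20 km stack" claim.
# CD capacity and thickness are assumed typical values, not slide figures.
annual_data_bytes = 10e15        # 10 Petabytes/year, from the previous slide
cd_capacity_bytes = 650e6        # assumed ~650 MB per CD
cd_thickness_m = 1.2e-3          # assumed ~1.2 mm per disc

n_cds = annual_data_bytes / cd_capacity_bytes
stack_height_km = n_cds * cd_thickness_m / 1000
print(f"{n_cds / 1e6:.0f} million CDs, a stack about {stack_height_km:.0f} km high")
# -> roughly 15 million CDs and ~18 km, consistent with the slide's round numbers
```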
LHC data processing. LHC data analysis requires a computing power equivalent to ~70,000 of today's fastest PC processors. Where will the experiments find such computing power?
Expected LHC computing needs. (Figure: projected computing needs compared with Moore's law, based on 2000 data.) Networking: 10–40 Gb/s to all big centres today.
Computing at CERN today. High-throughput computing based on reliable "commodity" technology: more than 1500 dual-processor PCs; more than 3 Petabytes of data on disk (10%) and tape (90%). Nowhere near enough!
Computing at CERN today. The new computer room is being populated with CPU servers, disk servers, and tape silos and servers…
Computing at CERN today. …while the existing computer centre (CPU servers, disk servers, tape silos and servers) is being cleared for renovation, and an upgrade of the power supply from 0.5 MW to 2.5 MW is underway.
What will happen next? New CERN management takes over in January with a reduced top-level management, i.e. more responsibilities move to the Departments (which replace the Divisions); only three people sit above the departments (CEO, CFO, CSO). The new IT Department will also include Administrative Computing (AS Division) and some computing services now in ETT. The EGEE project will start in April 2004 with substantial funding from the European Union. The IT department will have over 400 members (including about 100 non-staff).
What is new? Planning is now based on P+M, i.e. the cost of services will include personnel and overhead. The personnel plan will be based on budget rather than head count, which allows for re-profiling of staff skills. Outsourcing will continue, but insourcing is possible if justified by a business case. TCO considerations are becoming a real option, but our purchasing rules don't make life easy: if "quality" is to be taken into account, tender documents need to contain objectively measurable criteria, i.e. the bottom line is a number. This will require Finance Committee approval.
CERN's IT strategy so far: use commodity equipment wherever possible (compute servers, disk servers, tape servers); buy at the "sweet spot"; all based on RH Linux (for how long?). The only "big stuff" left are the tape robots. Other non-commodity equipment:
– machines running the AFS and database services
– systems for administrative computing
– a Solaris-based development cluster as a secondary platform
Equipment needed by the experiments comes in addition, but is not under IT's responsibility.
Questions to our partners. We would like answers to the following questions:
– Are there any cost-effective alternatives?
– Can you (industry) provide convincing arguments that "paying more is cheaper"?
– Are there examples we can look at?
– Does CERN have the right skill levels, or do we have too many highly skilled and expensive people?
– What is the added value of your proposition?
Is physics computing the best target, or should we rather look at technical and administrative computing (less than 50% of the new department is for physics)? Could you consider offering solutions which deviate from your standard products, possibly with the help of third parties?
Summary. We take the TCO approach seriously; new possibilities exist with P+M; we need measurable criteria to deviate from our "lowest cost" purchasing principle. Thank you for your interest in the topic. We look forward to your proposals and advice.