Finnish DataGrid meeting, CSC, Otaniemi, 28 August 2000 - V. Karimäki (HIP)


Slide 1: Data Intensive Computing in CMS Experiment
V. Karimäki (HIP), Finnish DataGrid meeting, CSC, Otaniemi, 28 August 2000

Slide 2: Outline of the Talk
- LHC computing challenge
- Hardware challenge
- CMS software
- Database management system
- Regional Centres
- DataGrid WP 8 in CMS
- Summary

Slide 3: Challenge: Collision rates (figure-only slide)

Slide 4: Challenges: Event complexity
- The signal event is obscured by 20 overlapping, uninteresting collisions in the same crossing
- Track reconstruction time at 10^34 luminosity is several times that at 10^33
- Time does not scale from previous generations

Slide 5: Challenges: Geographical dispersion
1800 physicists, 150 institutes, 32 countries.
- Geographical dispersion of people and resources
- Complexity of the detector and the LHC environment
- Scale: petabytes of data per year
Major challenges are associated with:
- Coordinated use of distributed computing resources
- Remote software development and physics analysis
- Communication and collaboration at a distance
R&D: new forms of distributed systems.

Slide 6: Challenges: Data rates
The online system is a multi-level trigger that filters out background and reduces the data volume:
- Collision rate: 40 MHz (40 TB/s)
- After Level 1 (special hardware): 75 kHz (75 GB/s)
- After Level 2 (embedded processors): 5 kHz (5 GB/s)
- After Level 3 (PCs): 100 Hz (100 MB/s), to data recording and offline analysis
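
The quoted bandwidths are consistent with an average raw event size of roughly 1 MB (the figure given on slide 15). A minimal Python sketch, assuming that event size and not taken from the original slides, reproduces the numbers:

```python
# Minimal check (not from the slides): each quoted bandwidth equals the
# event rate at that trigger level times a ~1 MB average event size.

EVENT_SIZE_MB = 1.0  # assumed raw event size, consistent with slide 15

trigger_chain = [
    ("Level 1 input (collision rate)", 40_000_000),  # 40 MHz
    ("Level 2 input",                  75_000),       # 75 kHz
    ("Level 3 input",                  5_000),        # 5 kHz
    ("Offline recording",              100),          # 100 Hz
]

for stage, rate_hz in trigger_chain:
    print(f"{stage:32s} {rate_hz:>11,d} Hz -> {rate_hz * EVENT_SIZE_MB:,.0f} MB/s")
```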

Slide 7: PetaByte Mass Storage
Each silo has 6,000 slots, each of which can hold a 50 GB cartridge; the combined theoretical capacity is 1.2 petabytes.
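
The transcript does not say how many silos are installed; 6,000 slots at 50 GB each gives about 300 TB per silo, so the quoted 1.2 PB total would correspond to four such silos. A small sketch of that arithmetic (the silo count is an inference, not stated above):

```python
# Back-of-the-envelope check; the number of silos is inferred, not quoted.

SLOTS_PER_SILO = 6_000
CARTRIDGE_GB = 50
QUOTED_TOTAL_PB = 1.2

per_silo_tb = SLOTS_PER_SILO * CARTRIDGE_GB / 1_000    # 300 TB per silo
implied_silos = QUOTED_TOTAL_PB * 1_000 / per_silo_tb  # 4 silos

print(f"Per silo: {per_silo_tb:.0f} TB; silos implied by 1.2 PB: {implied_silos:.0f}")
```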

Slide 8: The new Supercomputer?
From http://now.cs.berkeley.edu (the Berkeley NOW project).

Slide 9: Event Parallel Processing System
About 250 PCs with 500 Pentium processors are currently installed for offline physics data processing.
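
Offline processing is naturally event-parallel, because events are independent of one another. The sketch below is purely illustrative (it is not CMS software and the function names are invented); it only shows the pattern of farming independent events out to worker processes.

```python
# Illustrative sketch only (not CMS code): events are independent, so an
# offline farm can process them in an embarrassingly parallel way.
from multiprocessing import Pool

def reconstruct(event_id):
    """Stand-in for per-event digitization/reconstruction (e.g. an ORCA job)."""
    n_tracks = (event_id * 7) % 50  # dummy result
    return event_id, n_tracks

if __name__ == "__main__":
    event_ids = range(1_000)          # a batch of simulated events
    with Pool(processes=8) as pool:   # one worker per farm CPU
        results = pool.map(reconstruct, event_ids)
    print(f"Reconstructed {len(results)} events")
```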

Slide 10: Cost Evolution: CMS 1996 estimates versus 1999 Technology Tracking Team projections for 2005
- CPU: unit cost will be close to the early prediction
- Disk: will be more expensive (by a factor of ~2) than the early prediction
- Tape: currently zero to 10% annual cost decrease (potential problem)

Slide 11: Data Challenge plans in CMS
Dec 2000: Level 1 trigger TDR
- First large-scale productions for trigger studies
Dec 2001: DAQ TDR
- Continue High Level Trigger studies; production at Tier 0 and Tier 1s
Dec 2002: Software and Computing TDR
- First large-scale Data Challenge (5%)
- Use the full chain from online farms to production in Tier 0, 1, 2 centres
Dec 2003: Physics TDR
- Test physics performance; need to produce large amounts of data
- Verify technology choices by performing distributed analysis
Dec 2004: Second large-scale Data Challenge (20%)
- Final test of scalability of the fully distributed CMS computing system

Slide 12: Hardware - CMS computing
Total computing cost to 2006 inclusive: ~120 MCHF, roughly consistent with the canonical 1/3 : 2/3 rule:
- ~40 MCHF: central systems at CERN
- ~40 MCHF: ~5 Regional Centres, each ~20% of the central systems
- ~40 MCHF (?): universities, Tier 2 centres, MC production, etc.
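
As a quick check of the split (an illustration inferred from the figures above, not taken from the slide): one third of the 120 MCHF goes to CERN, five regional centres at ~20% of the CERN system each account for another third, and the remaining third covers everything else.

```python
# Arithmetic check of the canonical 1/3 : 2/3 split (illustration only).
TOTAL_MCHF = 120

cern = TOTAL_MCHF / 3                 # central systems at CERN
regional = 5 * 0.20 * cern            # ~5 regional centres, each ~20% of CERN
other = TOTAL_MCHF - cern - regional  # universities, Tier 2 centres, MC, ...

print(cern, regional, other)          # 40.0 40.0 40.0 (MCHF)
```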

Slide 13: Computing tasks - Software
Off-line computing:
- Detector simulation: OSCAR
- Physics simulation: CMKIN
- Calibration
- Event reconstruction and analysis: ORCA
- Event visualisation: IGUANA

Slide 14: CMS Software Milestones
We are well on schedule!

Slide 15: Worldwide Computing Plan (tiered hierarchy, figure)
One bunch crossing every 25 ns; 100 triggers per second; each event is ~1 MByte in size.
- Tier 0+1: online system feeding the offline farm / CERN Computer Center (>20 TIPS) at ~100 MB/s; raw detector data arrives at ~PB/s into a physics data cache
- Tier 1: regional centers (Fermilab ~4 TIPS; France, Italy and UK regional centers), linked to CERN at ~2.4 Gb/s (or ~622 Mb/s, or air freight)
- Tier 2: Tier 2 centers (~1 TIPS each), linked at ~622 Mb/s
- Tier 3: institutes (~0.25 TIPS), linked at 100-1000 Mb/s
- Tier 4: physicists' workstations
Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels, and the data for these channels should be cached by the institute server.
1 TIPS = 25,000 SpecInt95; a PC (1999) = 15 SpecInt95.
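
To put the TIPS figures in perspective, the conversion quoted on the slide (1 TIPS = 25,000 SpecInt95, a 1999 PC at ~15 SpecInt95) can be turned into equivalent numbers of PCs. A rough sketch using only the numbers quoted above:

```python
# Rough translation of the quoted TIPS figures into 1999-era PCs.
SPECINT95_PER_TIPS = 25_000
SPECINT95_PER_PC = 15  # one 1999 PC

def pcs_equivalent(tips):
    return tips * SPECINT95_PER_TIPS / SPECINT95_PER_PC

for site, tips in [("CERN offline farm (>20 TIPS)", 20),
                   ("Fermilab (~4 TIPS)", 4),
                   ("Tier 2 center (~1 TIPS)", 1),
                   ("Institute (~0.25 TIPS)", 0.25)]:
    print(f"{site:30s} ~ {pcs_equivalent(tips):,.0f} PCs (1999)")
```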

Slide 16: Computing at Regional Centers (model circa 2005, figure)
- Tier 0: CERN/CMS - 350k SI95, 350 TB disk, tape robot
- Tier 1: e.g. FNAL/BNL - 70k SI95, 70 TB disk, tape robot; connected to Tier 0 at N x 622 Mb/s
- Tier 2 center: 20k SI95, 20 TB disk, robot; connected at 622 Mb/s
- Tier 3: university working groups 1 ... N
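
The link capacities matter because datasets of this size take days to move. An illustrative estimate, assuming the links can be used at full capacity (an idealisation, not a figure from the slides):

```python
# Illustrative estimate: time to move Tier 1/Tier 2-sized datasets over the
# quoted links, assuming 100% link utilisation.
def transfer_days(size_tb, link_mbps):
    bits = size_tb * 1e12 * 8
    return bits / (link_mbps * 1e6) / 86_400

print(f"20 TB over 622 Mb/s:  {transfer_days(20, 622):.1f} days")    # ~3 days
print(f"350 TB over 2.4 Gb/s: {transfer_days(350, 2400):.1f} days")  # ~13.5 days
```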

Slide 17: Regional Centre Architecture, example by I. Gaines (figure)
Inputs: network from CERN, network from Tier 2 and simulation centers, tapes.
Core services: tape mass storage and disk servers; database servers.
Processing activities:
- Production reconstruction (Raw/Sim -> ESD): scheduled, predictable; experiment/physics groups
- Production analysis (ESD -> AOD, AOD -> DPD): scheduled; physics groups
- Individual analysis (AOD -> DPD and plots): chaotic; physicists
Support services: physics software development, R&D systems and testbeds, info servers, code servers, web servers, telepresence servers, training, consulting, help desk.
Connected to: desktops, Tier 2, local institutes, CERN, tapes.

Slide 18: CMS production 2000 - Grid WP 8 (production chain, figure)
- MC production: HEPEVT ntuples -> CMSIM -> signal Zebra files with HITS
- ORCA production: ORCA ooHit formatter -> Objectivity database; ORCA digitization (merging signal and MB from the MB Objectivity database) -> Objectivity database
- HLT group: HLT algorithms -> new reconstructed objects -> Objectivity database
- Databases: catalog import; Objectivity databases mirrored at regional sites (US, Russia, Italy, ...)

Slide 19: Summary
- Challenges: high rates, large data sets, complexity, worldwide dispersion, cost
- Solutions: event parallelism, commodity components, computing modelling, distributed computing, the OO paradigm, OO databases
- Planning: CMS is on schedule with its various milestones
- DataGrid WP 8: production of a large number of events in autumn 2000

