Slide 1: e+e- Colliders
- First e+e- collider: ADA in Frascati, 1961; 0.2 GeV, ~1 m radius
- Highest-energy e+e- collider: LEP at CERN, 1989-2000; 200 GeV, ~4 km radius
Slide 2: CMS at LHC, 2007 Start
- LHC schedule reconfirmed at CERN Council, June 2003
- First beams: April 2007; physics runs from Summer 2007
- pp collisions at √s = 14 TeV, luminosity L = 10^34 cm^-2 s^-1; heavy-ion running as well
- Experiments: ATLAS and CMS (pp, general purpose; heavy ions), LHCb (B-physics), ALICE (heavy ions), TOTEM
Slide 3: HCAL Barrels Done; Installing HCAL Endcap and Muon CSCs in SX5
- 36 muon CSCs successfully installed on YE-2,3 at an average rate of 6/day (4/day planned); cabling and commissioning in progress
- HE-1 complete; HE+ will be mounted in Q4 2003
Slide 4: Large Hadron Collider (LHC)
- 14 TeV proton-proton collisions; luminosity 10^34 cm^-2 s^-1
- 2835 bunches per beam; 10^11 protons per bunch; bunch spacing 7.5 m (25 ns)
- Bunch crossings: 4x10^7 Hz; parton collisions: 10^9 Hz; Higgs production: ~10000 per day
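Taken together, these figures are mutually consistent. As a rough cross-check, here is a minimal Python sketch that reproduces the quoted orders of magnitude; the ~80 mb inelastic pp cross-section and the ~10 pb illustrative Higgs production cross-section are assumptions, not numbers from the slide.

    # Back-of-the-envelope check of the rates quoted on this slide.
    # Assumed, not from the slide: sigma_inel ~ 80 mb and an illustrative
    # Higgs production cross-section of ~10 pb.

    L = 1e34                  # instantaneous luminosity, cm^-2 s^-1 (slide)
    bunch_spacing = 25e-9     # seconds between bunch crossings (slide)
    sigma_inel = 80e-27       # cm^2 (~80 mb inelastic pp, assumption)
    sigma_higgs = 10e-36      # cm^2 (~10 pb Higgs production, assumption)

    crossing_rate = 1 / bunch_spacing        # ~4x10^7 Hz bunch crossings
    pp_rate = L * sigma_inel                 # ~10^9 Hz inelastic pp collisions
    per_crossing = pp_rate / crossing_rate   # ~20 overlapping collisions per crossing
    higgs_per_day = L * sigma_higgs * 86400  # ~10^4 Higgs events per day

    print(f"bunch crossings : {crossing_rate:.1e} Hz")
    print(f"pp collisions   : {pp_rate:.1e} Hz (~{per_crossing:.0f} per crossing)")
    print(f"Higgs per day   : {higgs_per_day:.0f}")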
Slide 5: LHC Magnets
- 9 Tesla field
- Dipoles separated by 20 cm
- Cooled to superfluid liquid-helium temperatures
- 20 km of magnets
Slide 6: LHC Magnets
Slide 7: LHC Detectors
- B-physics and CP violation
- Heavy ions and the quark-gluon plasma
- CMS
- ATLAS
Slide 8: The LHC at the CERN Laboratory in Geneva, Switzerland
Slide 9: The CMS detector at the LHC
Slide 10: The LHC's 300-foot shaft
Slide 11: The CMS cavern at the LHC, 300 feet underground
Slide 12: LHC
Slide 13: LHC Computing: Different from Previous Experiment Generations
One of the four LHC detectors (CMS). The online system uses a multi-level trigger to filter out background and reduce the data volume:
- Collision data: 40 MHz (80 TB/sec)
- Level 1 (special hardware): 75 kHz (75 GB/sec)
- Level 2 (embedded processors): 5 kHz (5 GB/sec)
- Level 3 (PCs): 100 Hz (100-1000 MB/sec)
Offline: data processing, analysis, and selection. Raw recording rate 0.1-1 GB/sec; 3-8 PetaBytes/year.
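Each trigger level buys one to two orders of magnitude of rejection, and the surviving rate fixes the yearly archive size. A minimal Python sketch of that arithmetic follows; the ~10^7 seconds of data-taking per year is an assumed figure, not stated on the slide.

    # Rejection factor at each trigger stage and the implied yearly data
    # volume. Output rates are taken from the slide; the live time of
    # ~10^7 seconds per year is an assumption.

    rates_hz = {
        "collisions": 40e6,   # 40 MHz of bunch crossings
        "level 1":    75e3,   # after the hardware trigger
        "level 2":    5e3,    # after the embedded processors
        "level 3":    100,    # after the PC farm, written to storage
    }

    stages = list(rates_hz.items())
    for (prev_name, prev_rate), (name, rate) in zip(stages, stages[1:]):
        print(f"{name}: {rate:g} Hz (rejects ~{prev_rate / rate:,.0f}x of {prev_name})")

    seconds_per_year = 1e7                   # assumed live time per year
    for gb_per_s in (0.1, 1.0):              # raw recording rate range (slide)
        pb_per_year = gb_per_s * 1e9 * seconds_per_year / 1e15
        print(f"{gb_per_s} GB/s -> ~{pb_per_year:.0f} PB/year")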
Slide 14: Regional Center Hierarchy (Worldwide Data Grid)
- Experiment: one bunch crossing per 25 nsec; each event is ~1 MByte in size; ~PBytes/sec off the detector
- Tier 0+1: online system feeding the offline farm and CERN computer center at 100-1000 MBytes/sec
- Tier 1: regional centers (France, FNAL, Italy, UK) linked at ~2.4 Gbits/sec
- Tier 2: Tier 2 centers linked at ~0.6-2.5 Gbits/sec
- Tier 3: institutes (~0.25 TIPS each, with a physics data cache) at ~622 Mbits/sec
- Tier 4: workstations at 100-1000 Mbits/sec
- Physicists work on analysis "channels"; total processing power ~200,000 of today's fastest PCs
Slide 15: Production BW Growth of Int'l HENP Network Links (US-CERN Example)
- Rate of progress >> Moore's Law:
  - 9.6 kbps analog (1985)
  - 64-256 kbps digital (1989-1994) [x 7-27]
  - 1.5 Mbps shared (1990-93; IBM) [x 160]
  - 2-4 Mbps (1996-1998) [x 200-400]
  - 12-20 Mbps (1999-2000) [x 1.2k-2k]
  - 155-310 Mbps (2001-02) [x 16k-32k]
  - 622 Mbps (2002-03) [x 65k]
  - 2.5 Gbps (2003-04) [x 250k]
  - 10 Gbps (2005) [x 1M]
- A factor of ~1M over the period 1985-2005 (a factor of ~5k during 1995-2005)
- HENP has become a leading applications driver, and also a co-developer, of global networks
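The bracketed factors are growth multiples relative to the 9.6 kbps analog link of 1985. The Python sketch below recomputes a few of them and compares the overall twenty-year factor with Moore's-law scaling; the 18-month doubling period is an assumption chosen only for illustration.

    # Growth of the US-CERN link relative to the 9.6 kbps 1985 baseline,
    # compared with Moore's-law scaling (doubling every 18 months is an
    # assumed illustrative figure).

    baseline_kbps = 9.6
    milestones_kbps = {
        "1.5 Mbps (1990-93)": 1.5e3,
        "622 Mbps (2002-03)": 622e3,
        "10 Gbps (2005)":     10e6,
    }
    for name, kbps in milestones_kbps.items():
        print(f"{name}: x{kbps / baseline_kbps:,.0f}")

    years = 2005 - 1985
    moore_factor = 2 ** (years / 1.5)        # ~10^4 over 20 years
    print(f"Moore's law over {years} years: x{moore_factor:,.0f}")
    print("Network growth over the same period: x~1,000,000")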
Slide 16: HENP Major Links: Bandwidth Roadmap (Scenario) in Gbps
- Continuing the trend: ~1000 times bandwidth growth per decade
- We are rapidly learning to use multi-Gbps networks dynamically
Slide 17: History: One Large Research Site
- Current traffic up to ~400 Mbps; projections: 0.5 to 24 Tbps by ~2012
- Much of the traffic: SLAC to IN2P3/RAL/INFN, via ESnet+France and Abilene+CERN
Slide 18: Digital Divide Illustrated by Network Infrastructures: TERENA NREN Core Capacity
- Core capacity goes up in large steps (current and in two years): 10 to 20 Gbps; 2.5 to 10 Gbps; 0.6-1 to 2.5 Gbps
- SE Europe, the Mediterranean, the FSU, and the Middle East show less progress and rely on older technologies (below 0.15 and 1.0 Gbps): the digital divide will not be closed
- Source: TERENA
Slide 19: The Global Lambda Integrated Facility for Research and Education (GLIF)
- A virtual organization supporting persistent data-intensive scientific research and middleware development on "LambdaGrids"
- Grid applications "ride" on dynamically configured networks based on optical wavelengths
- Architecting an international LambdaGrid infrastructure