e+e− Colliders
- First e+e− collider: ADA in Frascati (… GeV, ~1 m radius)
- Highest-energy e+e− collider: LEP at CERN (… GeV, ~4 km radius)
CMS at LHC: 2007 Start
- First beams: April 2007; physics runs from Summer 2007 (LHC schedule reconfirmed at CERN Council, June 2003)
- pp collisions at √s = 14 TeV, L = 10^34 cm^-2 s^-1; heavy-ion runs as well
- Experiments: ATLAS and CMS (general purpose; pp and heavy ions), LHCb (B physics), ALICE (heavy ions), TOTEM
HCAL Barrels Done: Installing HCAL Endcap and Muon CSCs in SX5
- 36 muon CSCs successfully installed on YE-2,3; average rate 6/day (4/day planned); cabling and commissioning under way
- HE-1 complete; HE+ will be mounted in Q4 2003
Large Hadron Collider (LHC)
- 14 TeV proton–proton collisions; luminosity 10^34 cm^-2 s^-1
- 2835 bunches/beam, … protons/bunch; bunch spacing 7.5 m (25 ns) → 4×10^7 Hz bunch crossings
- ~10^9 Hz parton collisions; Higgs production ~10000 per day
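The spacing and crossing-rate figures above are consistent with each other; a quick sanity check (the 25 ns spacing is from the slide, the rounded speed of light is the only added constant):

```python
# Sanity-check the LHC bunch-crossing figures quoted on this slide.
C_LIGHT = 3.0e8      # speed of light in m/s (rounded)
SPACING_T = 25e-9    # time between bunch crossings: 25 ns

crossing_rate_hz = 1.0 / SPACING_T   # 25 ns spacing -> 4x10^7 crossings/s
spacing_m = C_LIGHT * SPACING_T      # -> 7.5 m between bunches

print(f"crossing rate: {crossing_rate_hz:.1e} Hz")   # 4.0e+07 Hz
print(f"bunch spacing: {spacing_m:.1f} m")           # 7.5 m
```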
LHC Magnets
- 9 Tesla field; dipoles separated by 20 cm
- Cooled to superfluid liquid-helium temperatures
- 20 km of magnets
LHC Detectors
- ATLAS, CMS: general purpose
- LHCb: B physics, CP violation
- ALICE: heavy ions, quark–gluon plasma
LHC: CERN laboratory in Geneva, Switzerland
LHC: CMS detector
LHC: 300-foot shaft
LHC: CMS cavern (300 feet underground)
LHC Computing: Different from Previous Experiment Generations
One of the four LHC detectors (CMS). The online system uses a multi-level trigger to filter out background and reduce the data volume:
- Level 1 (special hardware): accepts 40 MHz (80 TB/sec), passes 75 kHz (75 GB/sec)
- Level 2 (embedded processors): passes 5 kHz (5 GB/sec)
- Level 3 (PCs): passes 100 Hz (… MB/sec) to offline data processing, analysis, and selection
Raw recording rate: 0.1–1 GB/sec; PetaBytes per year.
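The trigger chain above is a staged rate-reduction pipeline; a minimal sketch of the arithmetic, using only the rates quoted on the slide:

```python
# Staged rate reduction through the CMS multi-level trigger (rates from the slide).
rates_hz = {
    "collisions":     40e6,   # Level-1 input: 40 MHz bunch crossings
    "after level 1":  75e3,   # special hardware
    "after level 2":  5e3,    # embedded processors
    "after level 3":  100.0,  # PC farm output to storage
}

stages = list(rates_hz.items())
for (name_in, r_in), (name_out, r_out) in zip(stages, stages[1:]):
    print(f"{name_in} -> {name_out}: reduction ×{r_in / r_out:.0f}")

overall = rates_hz["collisions"] / rates_hz["after level 3"]
print(f"overall reduction: ×{overall:.0f}")   # ×400000
```

The three stages reduce the rate by factors of roughly 533, 15, and 50, for an overall rejection of 400,000 to 1.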
Experiment Regional Center Hierarchy (Worldwide Data Grid)
- One bunch crossing per 25 nsec; each event is ~1 MByte in size
- Physicists work on analysis “channels”; processing power ~200,000 of today’s fastest PCs; physics data cache ~PBytes
- Tier 0+1: Online System (~PBytes/sec) → Offline Farm and CERN Computer Center (100–1000 MBytes/sec)
- Tier 1: national centers (France, FNAL, Italy, UK) at ~2.4 Gbits/sec
- Tier 2: regional Tier2 Centers at ~622 Mbits/sec
- Tier 3: institutes (~0.25 TIPS) at … Mbits/sec
- Tier 4: workstations
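Combining the figures on the last two slides (~1 MByte per event, ~100 Hz written to storage) reproduces the quoted PetaBytes-per-year scale; a back-of-envelope sketch, assuming ~10^7 live seconds per running year (a common accelerator rule of thumb, not stated on the slide):

```python
# From event size and storage rate to the yearly data volume.
EVENT_MB = 1.0          # ~1 MByte per event (slide figure)
STORAGE_HZ = 100.0      # ~100 Hz written out after the trigger (slide figure)
LIVE_SECONDS = 1e7      # assumed live seconds per running year

mb_per_s = EVENT_MB * STORAGE_HZ             # ~100 MB/s raw recording rate
pb_per_year = mb_per_s * LIVE_SECONDS / 1e9  # MB -> PB

print(f"recording rate: {mb_per_s:.0f} MB/s")   # 100 MB/s
print(f"yearly volume: ~{pb_per_year:.0f} PB")  # ~1 PB
```

The ~100 MB/s result sits at the low end of the 0.1–1 GB/sec raw recording rate quoted earlier.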
Production BW Growth of Int’l HENP Network Links (US-CERN Example)
- Rate of progress >> Moore’s Law:
  - 9.6 kbps analog (1985)
  - … kbps digital (…) [×7–27]
  - 1.5 Mbps shared (1990–3; IBM) [×160]
  - 2–4 Mbps (…) [×…]
  - … Mbps (…) [×1.2k–2k]
  - … Mbps (2001–2) [×16k–32k]
  - 622 Mbps (2002–3) [×65k]
  - 2.5 Gbps (2003–4) [×250k]
  - 10 Gbps (2005) [×1M]
- A factor of ~1M over a period of … (a factor of ~5k during …)
- HENP has become a leading applications driver, and also a co-developer of global networks
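The “>> Moore’s Law” claim can be checked directly; a sketch comparing the 1985–2005 link growth with a doubling-every-18-months baseline (the 18-month doubling constant is an assumption, not from the slide):

```python
# Compare US-CERN link bandwidth growth (1985-2005) with a Moore's-law baseline.
start_bps = 9.6e3   # 9.6 kbps analog link, 1985
end_bps = 10e9      # 10 Gbps, 2005

growth = end_bps / start_bps       # ~1.04e6: the slide's "×1M" factor
years = 2005 - 1985
moore = 2 ** (years / 1.5)         # doubling every ~18 months -> ~1e4

print(f"network growth: ×{growth:.2e}")
print(f"Moore's law over the same span: ×{moore:.2e}")
```

Over the same 20 years, an 18-month doubling gives only a factor of ~10^4, roughly 100× less than the measured link growth.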
HENP Major Links: Bandwidth Roadmap (Scenario), in Gbps
Continuing the trend: ~1000× bandwidth growth per decade. We are rapidly learning to use multi-Gbps networks dynamically.
History: One Large Research Site (SLAC)
- Current traffic ~400 Mbps; projections: 0.5 to 24 Tbps by ~2012
- Much of the traffic: SLAC ↔ IN2P3/RAL/INFN, via ESnet + France and Abilene + CERN
Digital Divide Illustrated by Network Infrastructures: TERENA NREN Core Capacity
- Core capacity (current, and in two years) goes up in large steps: 10 to 20 Gbps; 2.5 to 10 Gbps; … to 2.5 Gbps
- SE Europe, Mediterranean, FSU, Middle East: less progress, based on older technologies (below 0.15 and 1.0 Gbps); the digital divide will not be closed
- Source: TERENA
The Global Lambda Integrated Facility for Research and Education (GLIF)
- Virtual organization supporting persistent data-intensive scientific research and middleware development on “LambdaGrids”
- Grid applications “ride” on dynamically configured networks based on optical wavelengths
- Architecting an international LambdaGrid infrastructure