Presentation transcript:

Slide 1: High Energy & Nuclear Physics Experiments and Advanced Cyberinfrastructure
Paul Avery, University of Florida
Internet2 Meeting, San Diego, CA, October 11, 2007

Slide 2: Context: Open Science Grid
- Consortium of many organizations (multiple disciplines)
- Production grid cyberinfrastructure
- 75+ sites, 30,000+ CPUs: US, UK, Brazil, Taiwan

Slide 3: OSG Science Drivers
- Experiments at the Large Hadron Collider: new fundamental particles and forces; 100s of petabytes (2008 – ?)
- High Energy & Nuclear Physics experiments: top quark, nuclear matter at extreme density; ~10 petabytes (1997 – present)
- LIGO: search for gravitational waves; ~few petabytes (2002 – present)
Drivers: data growth, community growth. Future grid resources: massive CPU (PetaOps), large distributed datasets (>100 PB), global communities (1000s), international optical networks.

Slide 4: OSG History in Context
Primary drivers: LHC and LIGO. Timeline of precursor projects funded by DOE and NSF (PPDG, GriPhyN, iVDGL, Trillium, Grid3) leading to OSG, alongside the European grid efforts, the Worldwide LHC Computing Grid, and campus and regional grids. The timeline spans LHC construction, preparation, commissioning, and operations, as well as LIGO preparation and operation.

Slide 5: LHC Experiments at CERN
- 27 km tunnel spanning Switzerland and France
- Experiments: ATLAS, CMS, ALICE, LHCb, TOTEM
- Search for: origin of mass, new fundamental forces, supersymmetry, other new particles
- Running: 2008 – ?

Slide 6: Collisions at LHC (2008 – ?)
Proton–proton collisions, ~20 collisions per bunch crossing:
- Bunches per beam: 2835
- Protons per bunch: 10^11
- Beam energy: 7 TeV x 7 TeV
- Luminosity: 10^34 cm^-2 s^-1
- Crossing rate: every 25 nsec
- Collision rate: ~10^9 Hz
- New physics rate: ~10^-5 Hz
- Selection: ~1 in 10^13 – 10^14
(Figure: a bunch crossing, with protons, partons (quarks, gluons), leptons, and jets.)
A quick arithmetic check of these rates follows below.
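As a sanity check of the quoted rates, the following sketch (illustrative Python, not from the talk) derives the collision rate from the crossing interval and pile-up, and the selectivity from the ratio of the collision rate to the new-physics rate:

```python
# Back-of-the-envelope check of the LHC rate numbers on this slide.
# All inputs are quoted on the slide; the rest is arithmetic.

CROSSING_INTERVAL_S = 25e-9       # bunch crossing every 25 ns
COLLISIONS_PER_CROSSING = 20      # ~20 pp collisions per crossing (pile-up)
NEW_PHYSICS_RATE_HZ = 1e-5        # quoted rate of interesting events

crossing_rate_hz = 1.0 / CROSSING_INTERVAL_S                     # 4e7 Hz
collision_rate_hz = crossing_rate_hz * COLLISIONS_PER_CROSSING   # ~8e8, i.e. ~1e9 Hz

# Selectivity: ordinary collisions per interesting event.
selection = collision_rate_hz / NEW_PHYSICS_RATE_HZ              # ~1e14

print(f"collision rate ~ {collision_rate_hz:.1e} Hz")
print(f"selection      ~ 1 in {selection:.0e}")
```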

Slide 7: LHC Data and CPU Requirements (ATLAS, CMS, LHCb)
Storage:
- Raw recording rate 0.2 – 1.5 GB/s
- Large Monte Carlo data samples
- 100 PB by ~2012, 1000 PB later in the decade?
Processing:
- PetaOps (> 600,000 3 GHz cores)
Users:
- 100s of institutes, 1000s of researchers
A rough data-volume estimate derived from the recording rates is sketched below.
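To put the recording rates on a yearly scale, here is a minimal sketch assuming roughly 10^7 seconds of live data-taking per year (a common rule of thumb, not a number from the slide):

```python
# Rough annual raw-data volume implied by the quoted recording rates.
# The 10^7 s/year of live data-taking is an assumed rule of thumb.

SECONDS_LIVE_PER_YEAR = 1e7
RATES_GB_PER_S = (0.2, 1.5)       # raw recording rate range from the slide

for rate in RATES_GB_PER_S:
    volume_pb = rate * SECONDS_LIVE_PER_YEAR / 1e6    # GB -> PB
    print(f"{rate:4.1f} GB/s  ->  ~{volume_pb:.0f} PB of raw data per year")
```

Under that assumption a single experiment records on the order of 2 – 15 PB of raw data per year, before adding the comparably large Monte Carlo samples.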

Slide 8: LHC Global Collaborations (ATLAS, CMS)
- 2000 – 3000 physicists per experiment
- USA is 20 – 31% of the total

Slide 9: CMS Experiment on the LHC Global Grid
(Diagram: the online system feeds the Tier 0 CERN Computer Center, which distributes data over >10 Gb/s links to Tier 1 centers such as FermiLab, Korea, Russia, and the UK; Tier 1s feed Tier 2 university sites such as Caltech, U Florida, UCSD, Iowa, and Maryland over multi-Gb/s links; Tier 3/Tier 4 physics caches and PCs, e.g. FIU, sit below, all operated within OSG.)
- 5000 physicists, 60 countries
- 10s of petabytes/yr by 2009
- CERN / Outside resources = 10 – 20%
A toy representation of this tier hierarchy is sketched after this slide.
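Purely as an illustration of the tiered fan-out described above, the sketch below models the hierarchy as a small Python data structure; the site lists are the examples from the slide, while the structure and field names are hypothetical and not any real CMS or OSG configuration format:

```python
# Illustrative model of the CMS tiered data-distribution hierarchy.
# Sites are examples from the slide; the Tier class itself is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Tier:
    level: int                      # 0 = CERN, 1 = national centers, 2 = universities
    sites: list[str]
    feeds: list["Tier"] = field(default_factory=list)

tier2 = Tier(2, ["Caltech", "U Florida", "UCSD", "Iowa", "Maryland"])
tier1 = Tier(1, ["FermiLab", "Korea", "Russia", "UK"], feeds=[tier2])
tier0 = Tier(0, ["CERN Computer Center"], feeds=[tier1])

def walk(tier: Tier, indent: int = 0) -> None:
    """Print the hierarchy from Tier 0 downward."""
    print("  " * indent + f"Tier {tier.level}: {', '.join(tier.sites)}")
    for child in tier.feeds:
        walk(child, indent + 1)

walk(tier0)
```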

Slide 10: LHC Global Grid
- 11 Tier-1 sites
- 112 Tier-2 sites (and growing)
- 100s of universities
(Map credit: J. Knobloch)

Slide 11: LHC Cyberinfrastructure Growth: CPU
(Chart: projected CPU growth at CERN, Tier-1, and Tier-2 sites, reaching ~100,000 cores.)
- Multi-core boxes
- AC & power challenges

Slide 12: LHC Cyberinfrastructure Growth: Disk
(Chart: projected disk growth at CERN, Tier-1, and Tier-2 sites, approaching 100 petabytes.)

Slide 13: LHC Cyberinfrastructure Growth: Tape
(Chart: projected tape growth at CERN and Tier-1 sites, approaching 100 petabytes.)

Slide 14: HENP Bandwidth Roadmap for Major Links (in Gbps)
(Table: projected bandwidth requirements for major HENP links, paralleled by the ESnet roadmap.)

Slide 15: HENP Collaboration with Internet2
- HENP SIG

Slide 16: HENP Collaboration with NLR
- UltraLight and other networking initiatives
- Spawning state-wide and regional networks (FLR, SURA, LONI, …)

Slide 17: US LHCNet, ESnet Plan: 30 → 80 Gbps US–CERN
(Network map: ESnet4 with an SDN core of 30 – 50 Gbps, an IP core of ≥10 Gbps, Science Data Network circuit transport, metropolitan area rings, major DOE Office of Science sites, ESnet hubs, and high-speed cross connects with Internet2/Abilene. The US-LHCNet data network plan calls for 3 to 8 x 10 Gbps US–CERN on the NY–CHI–GVA–AMS route, growing through 30, 40, 60, 80 Gbps; ESnet MANs to FNAL and BNL, dark fiber to FNAL, peering with GEANT; the GVA–AMS connection runs via SURFnet or GEANT2 on an NSF/IRNC circuit.)

Slide 18: CSA06 Tier-1 – Tier-2 Data Transfers: 2006 – 07
(Chart: Tier-1 to Tier-2 transfer rates from September 2006 through March 2007, on a scale reaching 1 GB/sec.)

Slide 19: Computing, Offline and CSA07
(Chart: US transfer rates from FNAL to Tier-2 universities, June 2007, reaching 1 GB/s; Nebraska is highlighted.)
- One well-configured site today, but ~10 such sites in the near future: a network challenge

Slide 20: Current Data Transfer Experience
- Transfers are generally much slower than expected, or stop altogether
- Potential causes are difficult to diagnose: configuration problems? loading? queuing? database errors, experiment software errors, grid software errors? end-host problems? network problems? application failures?
- Recovery is complicated: insufficient information, and diagnosis is too slow to correlate with the error when it occurs
- Result: lower transfer rates, longer troubleshooting times
- Need intelligent services and smart end-host systems (see the sketch below)
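The call for smart end-host systems is essentially a call for automated diagnosis. A minimal sketch of that idea follows; the thresholds, record format, and site names are illustrative assumptions, not part of any OSG or experiment tool:

```python
# Toy end-host check: compare achieved transfer throughput against an
# expected rate and flag transfers that are slow or effectively stalled.

from dataclasses import dataclass

@dataclass
class Transfer:
    name: str
    bytes_moved: float      # bytes transferred so far
    elapsed_s: float        # wall-clock seconds since the transfer started

def diagnose(t: Transfer, expected_gbps: float = 1.0, stall_fraction: float = 0.1) -> str:
    """Classify a transfer as ok / slow / stalled relative to an expected rate."""
    achieved_gbps = (t.bytes_moved * 8) / (t.elapsed_s * 1e9)
    if achieved_gbps < expected_gbps * stall_fraction:
        return f"{t.name}: STALLED ({achieved_gbps:.2f} Gb/s)"
    if achieved_gbps < expected_gbps:
        return f"{t.name}: slow ({achieved_gbps:.2f} Gb/s vs {expected_gbps:.1f} Gb/s expected)"
    return f"{t.name}: ok ({achieved_gbps:.2f} Gb/s)"

# Example: a 100 GB dataset copy that has been running for 20 minutes.
print(diagnose(Transfer("FNAL -> Florida dataset copy", bytes_moved=100e9, elapsed_s=1200)))
```

In practice such checks would feed pervasive monitoring rather than a print statement, which is the kind of intelligent service the slide argues for.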

Slide 21: UltraLight: Integrating Advanced Networking in Applications
- 10 Gb/s+ network
- Caltech, UF, FIU, UM, MIT, SLAC, FNAL; international partners; Level(3), Cisco, NLR
- Funded by NSF

Slide 22: UltraLight Testbed
(Map of the NSF-funded UltraLight testbed.)

Slide 23: Many Near-Term Challenges
- Network: bandwidth, bandwidth, bandwidth; need for intelligent services and automation; more efficient utilization of the network (protocols, NICs, software clients, pervasive monitoring)
- Better collaborative tools
- Distributed authentication?
- Scalable services: automation
- Scalable support

Slide 24: END

Slide 25: Extra Slides

Slide 26: The Open Science Grid Consortium
(Diagram: Open Science Grid at the center of its member communities: U.S. grid projects, LHC experiments, laboratory centers, education communities, science projects & communities, technologists (network, HPC, …), computer science, university facilities, multi-disciplinary facilities, regional and campus grids.)

Slide 27: CMS: "Compact" Muon Solenoid
(Photo of the detector; the label "inconsequential humans" marks the people included for scale.)

Slide 28: Collision Complexity: CPU + Storage
(Event displays: all charged tracks with pT > 2 GeV vs. reconstructed tracks with pT > 25 GeV, with +30 minimum-bias events overlaid.)
- 10^9 collisions/sec; selectivity: ~1 in 10^13 – 10^14

Slide 29: LHC Data Rates: Detector to Storage
Physics filtering chain (event rate, data rate):
- Detector readout: 40 MHz, ~TBytes/sec
- Level 1 Trigger (special hardware): 75 kHz, 75 GB/sec
- Level 2 Trigger (commodity CPUs): 5 kHz, 5 GB/sec
- Level 3 Trigger (commodity CPUs): 100 Hz, 0.15 – 1.5 GB/sec of raw data to storage (+ simulated data)
The per-stage reduction factors implied by these rates are computed below.
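Since the rates at each stage are quoted on the slide, the rejection factors follow directly; the short sketch below (illustrative Python) just makes them explicit:

```python
# Reduction factors through the trigger chain, computed from the event
# rates quoted on the slide (no numbers beyond those are assumed).

stages = [
    ("Detector readout",           40e6),   # 40 MHz
    ("Level 1 (special hardware)", 75e3),   # 75 kHz
    ("Level 2 (commodity CPUs)",    5e3),   # 5 kHz
    ("Level 3 (commodity CPUs)",    1e2),   # 100 Hz to storage
]

for (stage_in, rate_in), (stage_out, rate_out) in zip(stages, stages[1:]):
    print(f"{stage_in:27s} -> {stage_out:27s}: keep 1 event in {rate_in / rate_out:,.0f}")

print(f"Overall: 1 event kept in {stages[0][1] / stages[-1][1]:,.0f}")
```

The overall online reduction is therefore about a factor of 400,000 before any offline selection.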

Slide 30: LIGO: Search for Gravity Waves
- LIGO Grid: 6 US sites, 3 EU sites (UK & Germany: Birmingham, Cardiff, AEI/Golm)
- LHO, LLO: LIGO observatory sites
- LSC: LIGO Scientific Collaboration

Slide 31: Is HEP Approaching the Productivity Plateau?
(Chart: the Gartner Group technology hype cycle applied to HEP grids, with CHEP conferences marking progress along the expectations curve: Padova 2000, Beijing 2001, San Diego 2003, Interlaken 2004, Mumbai 2006, Victoria 2007. From Les Robertson.)

Slide 32: Challenges from Diversity and Growth
- Management of an increasingly diverse enterprise: science/engineering projects, organizations, and disciplines as distinct cultures; accommodating new member communities (expectations?)
- Interoperation with other grids: TeraGrid, international partners (EGEE, NorduGrid, etc.), multiple campus and regional grids
- Education, outreach and training: training for researchers and students, but also for project PIs and program officers
- Operating a rapidly growing cyberinfrastructure: 25K → 100K CPUs, 4 → 10 PB disk; management of and access to rapidly increasing data stores; monitoring, accounting, achieving high utilization
- Scalability of the support model

Slide 33: Collaborative Tools: EVO Videoconferencing
- End-to-end self-managed infrastructure

Slide 34: REDDnet: National Networked Storage
- NSF-funded project led by Vanderbilt
- 8 initial sites (Brazil?)
- Multiple disciplines: satellite imagery, HENP, Terascale Supernova Initiative, structural biology, bioinformatics
- Storage: 500 TB disk, 200 TB tape

Slide 35: OSG Operations Model
Distributed model for scalability:
- VOs, sites, providers
- Rigorous problem tracking & routing
- Security, provisioning, monitoring, reporting
- Partners with EGEE operations